Basic Electronics – 2

Passive Components

RESISTORS

A resistor opposes the flow of electrons (current).

Resistance is measured in ohms. 1000 ohms is written as 1 kΩ (10³ ohms) and 1000 kΩ is written as 1 MΩ (10⁶ ohms).

Resistors are broadly of two types:

• Fixed resistors and variable resistors.

Fixed Resistors:

A fixed resistor is one whose resistance value is specified and, in general, cannot be varied. Common types are carbon film resistors (5%, 10% tolerance), metal film resistors (1%, 2% tolerance) and wire wound resistors.

Resistance Value

The resistance value is displayed using the color code (colored bands/stripes), because the average resistor is too small to have its value printed on it in numbers. Resistance values come from a discrete set of standard values.

For example, the standard values 1, 2.2, 4.7 and 10 (and their decade multiples) are typical.
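As an illustration, here is a minimal Python sketch of how the standard 4-band color code is commonly read (the first two bands are significant digits, the third is a power-of-ten multiplier, the fourth is the tolerance); the function name and band spellings are just illustrative.

# Minimal 4-band resistor color code decoder (illustrative sketch).
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
TOLERANCE = {"brown": 1.0, "red": 2.0, "gold": 5.0, "silver": 10.0}  # percent

def decode(band1, band2, multiplier, tolerance_band):
    # The first two bands form the significant digits; the third is a power-of-ten multiplier.
    value = (DIGITS[band1] * 10 + DIGITS[band2]) * 10 ** DIGITS[multiplier]
    return value, TOLERANCE[tolerance_band]

# Example: yellow-violet-red-gold reads as 4.7 kΩ ±5%.
ohms, tol = decode("yellow", "violet", "red", "gold")
print(f"{ohms} ohm, ±{tol}%")  # 4700 ohm, ±5.0%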

Types of Resistance

CARBON FILM RESISTORS

This is the cheapest and most general-purpose resistor. Usually the tolerance of the resistance value is ±5%. Power ratings of 1/8W, 1/4W and 1/2W are the most frequently used. The disadvantage of carbon film resistors is that they tend to be electrically noisy.

METAL FILM RESISTORS

Metal film resistors are used when a tighter tolerance (a more accurate value) is needed. Nichrome (Ni-Cr) is generally used as the resistive material. They are much more accurate in value than carbon film resistors; precision types are available with tolerances down to about ±0.05%.

OTHER RESISTORS

There is another type of resistor called the wire wound resistor. A wire wound resistor is made of metal resistance wire, and because of this it can be manufactured to precise values. High-wattage resistors can also be made by using a thick wire. Wire wound resistors cannot be used in high-frequency circuits, because the winding adds inductance.

Ceramic Resistor

Another type of resistor is the Ceramic resistor. These are wire wound resistors in a ceramic case, strengthened with a special cement. They have very high power ratings, from 1 or 2 watts to dozens of watts. These resistors can become extremely hot when used for high power applications, and this must be taken into account when designing the circuit.

SINGLE-IN LINE NETWORK RESISTORS

It is made with many resistors of the same value, all in one package. One side of each resistor is connected to a common pin shared with all the other resistors inside. One example of its use is to limit the current in a circuit driving many light-emitting diodes (LEDs). The resistance value is printed on the package.

4S-RESISTOR NETWORK

The 4S indicates that the package contains 4 independent resistors that are not wired together inside. The housing has eight leads instead of nine.

VARIABLE RESISTORS

There are two general ways in which variable resistors are used. One is the variable resistor whose value is easily changed, like the volume adjustment of a radio. The other is the semi-fixed resistor, which is not meant to be adjusted by anyone but a technician; it is used by the technician to set the operating condition of the circuit.

Semi-fixed resistors are used to compensate for the inaccuracies of other components and to fine-tune a circuit. The rotation angle of a typical variable resistor is about 300 degrees. Some variable resistors must be turned many times (multi-turn pots) to cover the whole range of resistance they offer.

This allows for very precise adjustment of their value. Variable resistors are commonly called “potentiometers”; the semi-fixed types are called “trimmer potentiometers” or “presets”.

LIGHT DEPENDENT RESISTANCE (LDR)

Some components change resistance value with the amount of light falling on them. One type is the cadmium sulfide photocell, a kind of resistor whose value depends on the amount of light falling on it. In darkness its resistance is very large, and as more and more light falls on it its resistance becomes smaller and smaller.

There are many types of these devices. They vary according to light sensitivity, size,  resistance value etc.

THERMISTOR

A thermistor is a thermally sensitive resistor: its resistance value changes with temperature, so it is used as a temperature sensor. There are generally two types of thermistors, with a Negative Temperature Coefficient (NTC) or a Positive Temperature Coefficient (PTC). The resistance of an NTC thermistor decreases on heating, while that of a PTC thermistor increases.

ELECTRIC POWER RATING

For example, suppose we want to power a 3V circuit from a 12V supply using only a series resistor; then we need to calculate the power rating of the resistor as well as its resistance value. The current consumed by the 3V circuit needs to be known.

Assume the current consumed is 250 mA (milliamps) in the above example. That means 9V (= 12 - 3 V) must be dropped across the resistor. The resistance value of the resistor becomes 9V / 0.25A = 36 ohms.

The power dissipated in this resistor becomes 0.25A x 0.25A x 36 ohms = 2.25W, so the resistor's power rating must be at least that (in practice, comfortably above it). Thus the selection of a resistor depends on two factors, namely tolerance and electric power rating.
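The same arithmetic in a short Python sketch (the numbers are from the example above; the 250 mA draw is the assumed figure from the text):

supply_v = 12.0   # supply voltage (V)
circuit_v = 3.0   # voltage required by the circuit (V)
current_a = 0.25  # current drawn by the circuit (A), assumed as in the text

drop_v = supply_v - circuit_v        # voltage the resistor must drop: 9 V
resistance = drop_v / current_a      # Ohm's law, R = V / I: 36 ohm
power = current_a ** 2 * resistance  # P = I^2 * R: 2.25 W

print(resistance, power)  # 36.0 2.25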

OHM’S LAW

An important and useful law: the current (I) flowing through a conductor is proportional to the voltage (V) applied across its ends. This can be written in algebraic form as V ∝ I, or V = IR, where the proportionality constant R is called resistance and is measured in ohms (Ω).

Resistors in circuits are also commonly specified in kilo-ohms (kΩ) and mega-ohms (MΩ). The other useful rearrangements are I = V/R and R = V/I.

Security Testing


What is Security testing?

Security testing is a process to determine that an information system protects data and maintains functionality as intended. It is the process that determines that confidential data stays confidential and users can perform only those tasks that they are authorized to perform. Security testing covers confidentiality, integrity, authentication, availability, authorization and non-repudiation.

Confidentiality – A security measure which protects against the disclosure of information to parties other than the intended recipient. It is by no means the only way of ensuring security, but it is a central one.

Integrity – A measure intended to allow the receiver to determine that the information it receives is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding extra information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.
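As a minimal illustration of such an algorithmic check, the Python sketch below appends an HMAC tag to a message using the standard hmac module; the key and message here are made-up placeholders.

import hmac, hashlib

key = b"shared-secret-key"               # placeholder shared key
message = b"transfer 100 to account 42"  # placeholder message

# The sender computes a keyed digest and sends it along with the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the digest over what it received and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True only if the message was not altered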

Authentication-This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claims to be, or assuring that a computer program is a trusted one.

Authorization- The process of determining that a requester is allowed to receive a service or perform an operation.
Access control is an example of authorization.

Availability-Assuring information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.

Non-repudiation- In reference to digital security, nonrepudiation means to ensure that a transferred message has been sent and received by the parties claiming to have sent and received the message. Nonrepudiation is a way to guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received the message.

Given enough time and resources, good security testing will ultimately penetrate a system.

There are some questions that need to be answered before diving deep into security testing. These are as follows:

What is “Vulnerability”?
What is “URL manipulation”?
What is “SQL injection”?
What is “XSS (Cross Site Scripting)”?
What is “Spoofing”?

Basic Electronics – 1

Introduction

Electronic components can be divided into two types: active and passive components.

Resistors, capacitors, etc. are known as passive components because they can only attenuate electrical voltages and signals; they cannot amplify them.

Devices like transistors and operational amplifiers (op-amps) can amplify, i.e. increase, the amplitude and energy associated with signals, and so are termed active components.

Apart from components and circuits, we must also be familiar with some of the essential electronic measuring instruments such as multimeters, regulated power supplies, function generators and oscilloscopes.


Solving software performance problems

How to solve software performance problems?

1.  Define your objective: where do you need to be?

It is surprising that many projects do not have well-defined performance objectives. When we ask what the performance objectives are, we very often get a response like “as fast as possible.” An objective such as this is not useful, because you cannot determine when you have achieved it.

You should define precise, quantitative, measurable performance objectives. Performance objectives can be measured using different performance counters like response time, throughput, or constraints on resource usage.
Example – “The response time for a request should be < 2 seconds with up to 1,000 users.”
or
“CPU utilization should be less than 65% for a peak load of 1,000 requests/second.”

When defining performance objectives, one should not forget the future. For example, the current performance objective may be to process 1,000 requests/second. However, two years down the line, the system may need to process 10,000 requests/second.
It is a good idea to consider future uses of your software so that you can anticipate these changes
and build in the necessary scalability.

2.  Find out where you are now

You should measure the current counter values for all possible use cases of the application.

3.  Can you achieve your objectives?

Before you start tuning your software, it is a good idea to see if you can actually achieve your objectives by tuning. If the difference between where you are now and where you need to be is small, then tuning will probably help. You may even be able to achieve your performance objectives without making changes to the code, by tuning operating system parameters, network configuration, file placements, and so on. If not, you may need to tune the software itself. This may require refactoring or redesign using performance principles and patterns: optimize algorithms and data structures and/or modify program code to use more efficient constructs.

Some simple calculations can help determine whether the performance objective can be achieved by tuning. If it can, then you can proceed with confidence as you tune the software.
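As a rough sketch of such a calculation (all numbers below are hypothetical): if the measured response time is broken down into component times, you can check whether the portion that tuning can realistically recover is large enough to reach the objective.

# Hypothetical breakdown of a 4.0 s measured response time (seconds).
measured = {"network": 0.4, "web_tier": 0.6, "database": 2.5, "rendering": 0.5}
objective = 2.0         # target response time in seconds
tunable_fraction = 0.6  # assume tuning can recover at most 60% of the database time

current = sum(measured.values())
best_case = current - measured["database"] * tunable_fraction
print(f"current={current:.1f}s best_case={best_case:.1f}s objective={objective}s")
print("tuning alone may be enough" if best_case <= objective else "tuning alone is unlikely to be enough")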

4.  Plan for Achieving Your Objectives

Compiling and Installing Custom Linux Kernel

The steps below work on Ubuntu or Debian-like systems. I have tested them on Ubuntu 10.10.
Download and unzip (preparation)
sudo apt-get update
sudo apt-get install kernel-package libncurses5-dev fakeroot wget bzip2
Download the kernel sources (for example linux-2.6.18.1.tar.bz2 from kernel.org) into /usr/src and unpack them as shown below.
cd /usr/src
sudo tar xjf linux-2.6.18.1.tar.bz2
sudo ln -s linux-2.6.18.1 linux
cd /usr/src/linux

Configuring the Kernel
sudo cp /boot/config-`uname -r` ./.config
sudo make menuconfig
Then browse through the kernel configuration menu and make your choices. When you are finished and select Exit, answer the following question (Do you wish to save your new kernel configuration?) with Yes:
Build the Kernel
sudo make-kpkg clean
sudo fakeroot make-kpkg --initrd --append-to-version=-custom_1.0 kernel_image kernel_headers
After --append-to-version= you can write any string that helps you identify the kernel, but it must begin with a minus (-) and must not contain whitespace.
Now be patient, the kernel compilation can take some hours, depending on your kernel configuration and your processor speed.
Install the new Kernel
cd /usr/src
ls -l
This will list the kernel files if everything was fine.
Install them like this (the exact file names depend on the string you passed to --append-to-version):
sudo dpkg -i linux-image-*custom*.deb
sudo dpkg -i linux-headers-*custom*.deb
Now reboot the system
sudo shutdown -r now
If everything goes well, it should come up with the new kernel. You can check if it’s really using your new kernel by running
sudo uname -r

Uninstalling the Kernel
Remove files
  1. /boot/vmlinuz*KERNEL-VERSION*
  2. /boot/initrd*KERNEL-VERSION*
  3. /boot/System.map*KERNEL-VERSION*
  4. /boot/config-*KERNEL-VERSION*
  5. /lib/modules/*KERNEL-VERSION*/
  6. /var/lib/initramfs-tools/
Run the command
sudo update-initramfs -k all -u

No Sound from Ubuntu linux or Mint

I searched Google for possible answers to the problem, but the solutions mentioned were not sufficient to solve my issue, and I wasted a couple of weekends on it. Finally I solved it by removing the PulseAudio and ALSA packages and installing OSS (Open Sound System).
What is OSS? OSS provides low-level audio drivers for users and a common API (application program interface) for developers. Ubuntu and Mint by default use ALSA (Advanced Linux Sound Architecture) to provide audio drivers.

Does OSS support my hardware?

Check the list of supported hardware from the below link.
http://opensound.hg.sourceforge.net/hgweb/opensound/opensound/file/3db750724c2d/devlists/Linux

Preparing to install OSS

1. REMOVE Pulseaudio packages

sudo apt-get purge pulseaudio gstreamer0.10-pulseaudio

2. Removing ALSA packages

sudo /etc/init.d/alsa-utils stop

sudo apt-get remove alsa-base alsa-utils

3. Blacklisting ALSA Kernel Modules

sudo dpkg-reconfigure linux-sound-base

4. Installing Prerequisite Packages

The second command contains some recommended packages.

sudo apt-get install -y binutils libgtk2.0-0 sed gcc libc6

sudo apt-get install -y libesd0 libsdl1.2debian-oss

 

Installing OSS

1. Installing from DEB File

Download the OSS .deb file from the 4Front website (http://www.opensound.com/download.cgi). Before you install OSS, reboot your system so that the ALSA modules will not load or interfere with it. When you log back in, use the terminal to install the OSS .deb file (GDebi fails to install this .deb for some reason).

sudo dpkg -i oss-linux*.deb

Configuring Applications to Use OSS

Type ossxmix in your terminal to launch the mixer.

YOU ARE DONE

“I have not failed. I’ve just found 10,000 ways that won’t work” –Thomas Edison

Statistical Principals for the Performance Tester

Over the years I have found that members of software development teams (developers, testers, administrators and managers alike) have an insufficient grasp of how to apply mathematics or interpret statistical data on the job.
As performance testers, we must know and be able to apply certain mathematical and statistical concepts.
Exemplar Data Sets
This section refers to three exemplar data sets for the purposes of illustration.
Data Sets Summary
The following is a summary of Data Set A, B, and C.
            Sample Size   Min.   Max.   Avg   Median   Normal   Mode   95th %   Std Dev.
Data Set A  100           1      7      4     4        4        4      6        1.5
Data Set B  100           1      16     4     1        3        1      16       6.0
Data Set C  100           0      8      4     4        1        3      8        2.6
Data Set A
100 total data points, distributed as follows:
●      5 data points have a value of 1.
●      10 data points have a value of 2.
●      20 data points have a value of 3.
●      30 data points have a value of 4.
●      20 data points have a value of 5.
●      10 data points have a value of 6.
●      5 data points have a value of 7.
Data Set B

100 total data points, distributed as follows:

●      80 data points have a value of 1.
●      20 data points have a value of 16.
Data Set C
  100 total data points, distributed as follows:
●      11 data points have a value of 0.
●      10 data points have a value of 1.
●      11 data points have a value of 2.
●      13 data points have a value of 3.
●      11 data points have a value of 4.
●      11 data points have a value of 5.
●      11 data points have a value of 6.
●      12 data points have a value of 7.
●      10 data points have a value of 8.
Averages
Also known as the arithmetic mean, or mean for short, the average is probably the most commonly used and most commonly misunderstood statistic of them all. Just add up all the numbers and divide by how many numbers you added. What could be simpler?
In this example, Data Sets A, B, and C each have an average of exactly 4, yet in terms of application response times these sets of data have extremely different meanings.
Given a response time goal of 5 seconds, looking at only the average of these sets, all three seem to meet the goal.
Looking at the data, however, shows that none of the data sets is composed only of data that meets the goal, and that Data Set B probably demonstrates some kind of performance anomaly.
Use caution when using averages to discuss response times, and, if at all possible, avoid using averages as your only reported statistic.
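A quick check in Python, using the data set definitions above, confirms that all three sets share the same mean:

from statistics import mean

# Expand the three data sets from the distributions listed above.
data_a = [1]*5 + [2]*10 + [3]*20 + [4]*30 + [5]*20 + [6]*10 + [7]*5
data_b = [1]*80 + [16]*20
data_c = []
for value, count in [(0, 11), (1, 10), (2, 11), (3, 13), (4, 11),
                     (5, 11), (6, 11), (7, 12), (8, 10)]:
    data_c += [value] * count

print(mean(data_a), mean(data_b), mean(data_c))  # all three means are exactly 4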
Percentiles
It is a straightforward concept easier to demonstrate than define. Consider the 95th percentile as an example. If you have 100 measurements ordered from greatest to least, and you count down the five largest measurements, the next largest measurement represents the 95th percentile of those measurements. For the purposes of response times, this statistic is read “Ninety-five percent of the simulated users experienced a response time of this value or less under the same conditions as the test execution.”
The 95th percentile of data set B above is 16 seconds. Obviously this does not give the impression of achieving our five-second response-time goal. Interestingly, this can be misleading as well: If we were to look at the 80th percentile on the same data set, it would be one second. Despite this possibility, percentiles remain the statistic that I find to be the most effective most often. That said, percentile statistics can stand alone only when used to represent data that’s uniformly or normally distributed and has an acceptable number of outliers.
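Using the simple "count down from the top" definition given above, a small sketch shows how different percentiles of Data Set B tell very different stories:

def percentile(data, pct):
    # Simple "count down from the top" percentile over the sorted data.
    ordered = sorted(data)
    index = max(0, int(len(ordered) * pct / 100) - 1)
    return ordered[index]

data_b = [1]*80 + [16]*20
print(percentile(data_b, 95))  # 16 - nowhere near a 5-second goal
print(percentile(data_b, 80))  # 1  - looks deceptively good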
Uniform Distributions
Uniform distribution is a term that represents a collection of data roughly equivalent to a set of random numbers that are evenly distributed between the upper and lower bounds of the data set. The key is that every number in the data set is represented approximately the same number of times. Uniform distributions are frequently used when modeling user delays, but aren’t particularly common results in actual response-time data. I’d go so far as to say that uniformly distributed results in response-time data are a pretty good indicator that someone should probably double-check the test or take a hard look at the application.
Normal Distributions
Also called a bell curve, a data set whose member data are weighted toward the center (or median value) is a normal distribution. When graphed, the shape of the “bell” of normally distributed data can vary from tall and narrow to short and squat, depending on the standard deviation of the data set; the smaller the standard deviation, the taller and more narrow the bell. Quantifiable human activities often result in normally distributed data. Normally distributed data is also common for response time data.
Standard Deviations
By definition, one standard deviation is the amount of variation within a set of measurements that encompasses approximately the most common 68 percent of all measurements in the set; what that means in plain English is that knowing the standard deviation of your data set tells you how densely the data points are clustered around the mean. Simply put, the smaller the standard deviation, the more consistent the data. To illustrate, the standard deviation of Data Set A is approximately 1.5, while the standard deviation of Data Set B is approximately 6.
Another rule of thumb is this: data with a standard deviation greater than half of its mean should be treated as suspect.
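The same numbers can be reproduced with Python's statistics module (sample standard deviation, using the data sets defined earlier), along with the rule-of-thumb check:

from statistics import stdev, mean

data_a = [1]*5 + [2]*10 + [3]*20 + [4]*30 + [5]*20 + [6]*10 + [7]*5
data_b = [1]*80 + [16]*20

print(round(stdev(data_a), 1))  # 1.5
print(round(stdev(data_b), 1))  # 6.0

# Rule of thumb: a standard deviation greater than half the mean is suspect.
for name, data in (("A", data_a), ("B", data_b)):
    print(name, "suspect" if stdev(data) > mean(data) / 2 else "ok")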
Statistical Significance
Statistical significance, also known as reliability, is about whether your results can be trusted to represent more than chance. Whenever possible, ensure that you collect at least 100 measurements from at least two independent tests.
There is no hard-and-fast rule for deciding which results are statistically similar without complex equations that call for volumes of data. If you are not sure after your first two tests, try comparing results from at least five test executions and apply the following rules to help you determine whether the results are similar enough to be considered reliable:
1.    If more than 20 percent (or one out of five) of the test execution results appear not to be similar to the rest, something is generally wrong with either the test environment, the application or the test itself.
2.    If a 95th percentile value for any test execution is greater than the maximum or less than the minimum value for any of the other test executions, it’s probably not statistically similar.
3.    If measurement from a test is noticeably higher or lower, when charted side-by-side, than the results of the rest of the test executions, it’s probably not statistically similar.
4.    If a single measurement category (for example, the response time for a specific object) in a test is noticeably higher or lower, when charted side-by-side with all the rest of the test execution results, but the results for all the rest of the measurements in that test are not, the test itself is probably statistically similar.

Economics of test automation

How to calculate the cost of test automation:
Cost of test automation = Cost of tool(s) + Labor costs of script creation + Labor costs of script maintenance
 
If a test script will be run every week for the next two years, automate the test if the cost of automation is less than the cost of manually executing the test 104 times.
Automate if:
Cost of automation < Cost of manually executing the test as many times as the automated test will be executed.
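A tiny sketch of this comparison in Python (all cost figures are made-up placeholders):

# Placeholder figures for illustration only.
tool_cost = 2000.0               # cost of the tool(s)
script_creation_cost = 800.0     # labor cost to create the script
script_maintenance_cost = 400.0  # labor cost to maintain the script over its lifetime

manual_cost_per_run = 50.0       # labor cost of one manual execution
planned_runs = 104               # e.g. weekly runs for two years

automation_cost = tool_cost + script_creation_cost + script_maintenance_cost
manual_cost = manual_cost_per_run * planned_runs

# automation_cost = 3200, manual_cost = 5200 -> automate
print("automate" if automation_cost < manual_cost else "stay manual")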

 

Monitoring Windows server using Nagios

Monitoring a Windows server requires installation of NSClient++ on the Windows host.

How it works?

For example, suppose disk space usage needs to be monitored on a Windows host.
1. Nagios executes the check_nt command on the Nagios server and asks it to monitor disk usage on the Windows machine.
2. check_nt on the Nagios server contacts the NSClient++ service on the remote Windows host and requests it to execute the USEDDISKSPACE check there. The result is returned to check_nt on the Nagios server by the NSClient++ daemon.
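Once NSClient++ is running you can exercise this path by hand from the Nagios server; the host IP below is a placeholder, and the arguments mirror the service definitions used later in this article:

/usr/local/nagios/libexec/check_nt -H 192.168.1.4 -p 12489 -v USEDDISKSPACE -l c -w 80 -c 90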

How to Install and configure NSClient++?

1. Download and install NSClient ++ from http://nsclient.org/download/
2. Modify the NSClient++ service
Type services.msc in the Run dialog, then double-click the NSClient++ service in the list. Select the check-box that says “Allow service to interact with desktop”.

3. Modify the Nsclient.ini file
Edit the C:\Program Files\NSClient++\NSC.ini file
– Uncomment everything under [modules] except RemoteConfiguration.dll and CheckWMI.dll
– Uncomment allowed_host under settings and add the IP address of the Nagios server.
– Uncomment the port# under [NSClient] section

If you have a firewall running, open the port used by NSClient++. The default ports used are 12489, 5666 and 5667.
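On the Windows host this can be done from an elevated command prompt; the rule name is arbitrary and the netsh form below is just one way to do it:

netsh advfirewall firewall add rule name="NSClient++" dir=in action=allow protocol=TCP localport=12489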

Configuration on the Nagios Monitoring Server.

1. Verify check_nt command and windows-server template
–  Verify that the check_nt is enabled under /usr/local/nagios/etc/objects/commands.cfg
# ‘check_nt’ command definition
define command{
command_name check_nt
command_line $USER1$/check_nt -H $HOSTADDRESS$ -p 12489 -v $ARG1$ $ARG2$
}

– Verify that the windows-server template is enabled under
/usr/local/nagios/etc/objects/templates.cfg
# Windows host definition template – This is NOT a real host, just a template!
define host{
name windows-server ; The name of this host template
use generic-host ; Inherit default values from the generic-host template
check_period 24x7 ; By default, Windows servers are monitored round the clock
check_interval 5 ; Actively check the server every 5 minutes
retry_interval 1 ; Schedule host check retries at 1 minute intervals
max_check_attempts 10 ; Check each server 10 times (max)
check_command check-host-alive ; Default command to check if servers are “alive”
notification_period 24x7 ; Send notification out at any time, day or night
notification_interval 30 ; Resend notifications every 30 minutes
notification_options d,r ; Only send notifications for specific host states
contact_groups admins ; Notifications get sent to the admins by default
hostgroups windows-servers ; Host groups that Windows servers should be a member of
register 0 ; DONT REGISTER THIS – ITS JUST A TEMPLATE
}

2. Uncomment windows.cfg in /usr/local/nagios/etc/nagios.cfg
# Definitions for monitoring a Windows machine
cfg_file=/usr/local/nagios/etc/objects/windows.cfg

3. Modify /usr/local/nagios/etc/objects/windows.cfg
By default, a sample host definition for a Windows server is given in windows.cfg; modify it to reflect the actual Windows server that needs to be monitored through Nagios.
# Define a host for the Windows machine we’ll be monitoring
# Change the host_name, alias, and address to fit your situation
define host{
use windows-server ; Inherit default values from a template
host_name remote-windows-host ; The name we’re giving to this host
alias Remote Windows Host ; A longer name associated with the host
address 192.168.1.4 ; IP address of the remote windows host
}

4. Define windows services that should be monitored.
Following are the default windows services that are already enabled in the sample windows.cfg. Make sure
to update the host_name on these services to reflect the host_name defined in the above step.
define service{
use generic-service
host_name remote-windows-host
service_description NSClient++ Version
check_command check_nt!CLIENTVERSION
}
define service{
use generic-service
host_name remote-windows-host
service_description Uptime
check_command check_nt!UPTIME
}
define service{
use generic-service
host_name remote-windows-host
service_description CPU Load
check_command check_nt!CPULOAD!-l 5,80,90
}
define service{
use generic-service
host_name remote-windows-host
service_description Memory Usage
check_command check_nt!MEMUSE!-w 80 -c 90
}
define service{
use generic-service
host_name remote-windows-host
service_description C:\ Drive Space
check_command check_nt!USEDDISKSPACE!-l c -w 80 -c 90
}
define service{
use generic-service
host_name remote-windows-host
service_description W3SVC
check_command check_nt!SERVICESTATE!-d SHOWALL -l W3SVC
}
define service{
use generic-service
host_name remote-windows-host
service_description Explorer
check_command check_nt!PROCSTATE!-d SHOWALL -l Explorer.exe
}

5. Enable Password Protection
If you specified a password in the NSC.ini configuration file of NSClient++ on the Windows machine, you'll need to modify the check_nt command definition to include the password.

Modify the /usr/local/nagios/etc/objects/commands.cfg file and add the password as shown below.
define command{
command_name check_nt
command_line $USER1$/check_nt -H $HOSTADDRESS$ -p 12489 -s My2Secure$Password -v $ARG1$ $ARG2$
}

6. Verify Configuration and Restart Nagios.

Verify the nagios configuration files as shown below.
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

7. Restart nagios
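On a typical source installation this is done with the init script (the exact path or service name may differ on your system):

sudo /etc/init.d/nagios restart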

In the next article I will cover how to install the Nagios server.

JMeter limitations when monitoring servers under load, and how to overcome them

JMeter has an option to add a monitor test plan to monitor application servers, but it only works with the Tomcat 5 status servlet. However, any servlet container that supports JMX (Java Management Extensions) can port the status servlet to provide the same information.

Also, if you want to use the monitor with other servlet or EJB containers, Tomcat's status servlet will work with other containers for the memory statistics without any modifications. To get thread information, the MBeanServer lookup will need to be changed to retrieve the correct MBeans.

But it is still not possible for a Windows IIS server.

One way to overcome this situation is to use Nagios. It can be used in conjunction with JMeter: load is generated using JMeter, and Nagios is used to monitor application server performance under load.

Now, the question is: what is Nagios, and how does it work?

Nagios is a system and network monitoring tool. It watches the hosts and services you specify and alerts you when things go bad (i.e. cross a configurable threshold value) and when they get better.

It works on Linux and Linux-like systems, but it can also monitor Windows servers, which is its most important aspect here.

The only requirements for running Nagios are a Linux machine (or a variant) and a C compiler.

In the next article I will explain in detail how it can be used to configure and monitor a Windows machine.