ROBOTICS: Pulse Width Modulation

01 - What is PWM?

Submitted by Webbot on November 30, 2008 - 1:08pm.

PWM stands for Pulse Width Modulation. This means that we can generate a pulse whose width (ie duration) can be altered.

The digital world

Since microcontrollers live in a digital world, their output pins can be either low (0v) or high (5v). However, the rest of the world tends not to be such an open-or-shut case - it tends to be analogue. Rather than just being on or off: motors tend to need speed control, lighting may need to be dimmed, servos need to move to a particular position, buzzers need a sound frequency, etc.

AVR microcontrollers have Analogue to Digital Converters (ADCs) to convert a voltage from the analogue world into a number, but they do not have Digital to Analogue Converters (DACs) to convert digital numbers back into variable voltages.

PWM is the closest solution.

By turning an output pin high and low repeatedly and very quickly, the result is the average of the amount of time the output is high. If it is always low the result is 0v; if it is always high the result is 5v; if it is half-and-half then the result is 2.5v.
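As a quick sanity check of those numbers, here is a tiny C snippet (my own illustration, not part of the original article) that computes the average voltage for a few on-times, assuming a 5v supply:

#include <stdio.h>

int main(void)
{
    const double supply_volts = 5.0;                 /* assumed 5v supply             */
    const double high_fraction[] = {0.0, 0.5, 1.0};  /* always low, half, always high */

    for (int i = 0; i < 3; i++)
        printf("high %3.0f%% of the time -> average %.2f volts\n",
               high_fraction[i] * 100.0, supply_volts * high_fraction[i]);
    return 0;
}

Running it prints 0.00, 2.50 and 5.00 volts - the same three cases described above.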

Why does this work? Well most real world devices have some kind of latency (ie they don't do what you ask immediately). This could be caused by a mixture of momentum, inductance, capacitance, friction (amongst others).

For example: if you connect a motor to a battery then it will, eventually, rotate at full speed. Disconnect the battery and the motor will take a little while to slow down until it stops. Equally, if the motor is only connected to the battery for a very short time before being disconnected then it won't have enough time to get up to full speed. So if we repeatedly connected and disconnected the battery then the motor would start turning, then slow down, start turning, slow down, etc. Obviously if we only did this a few times a second then it would be kind of jerky - but if we did it fast enough then we could control the speed of the motor depending on the percentage of time the battery was connected versus not connected.

Similarly - if we wanted to dim lights or LEDs then they take a little while to get up to 'full glow' and, once disconnected from the power, the glow fades away. So we could create a dimmer by varying the amount of time on or off.

Servos are another example. They tend to expect a pulse every 20ms - depending on the width of the pulse they move to a given position.

How do we create a PWM signal?

Before we discuss the intricacies of how we program a microcontroller, let's consider some basics to get a general idea of what we want to achieve.

Microcontrollers are very good with whole (integer) numbers. So assume we have two numbers: one called BOTTOM and a higher number called TOP. The microcontroller starts at BOTTOM, counts upwards until it reaches TOP, and then repeats the process. If we were to plot the resulting numbers on a graph we would end up with what is called a sawtooth waveform.

Of course you can never output this signal directly from your controller, as it can only cope with on or off and not all these numbers - the sawtooth just shows how the number starts at BOTTOM, counts up to TOP, and then starts all over again.

So the next step is to add a 'comparator' which is used to decide whether our output pin should be high or low. This comparator is yet another number which is somewhere in the range between BOTTOM and TOP. If the current Sawtooth number is less than the comparator value then the output will be low, otherwise the output will be high.

If the value of the comparator was equal to BOTTOM then the Sawtooth value could never be lower than bottom so our output pin would always be high. Equally if the comparator value was equal to TOP then the output pin would always be low. However: if the comparator value was the mid-value between BOTTOM and TOP then the output pin would spend 50% of its time being low and the other 50% being high. By varying the comparator value we can change the 'high' time anywhere between 0% and 100% of the time.

Suppose the sawtooth waveform is 6 units high and repeats every 3 units across. If we were to set our comparator to be 2 units above the BOTTOM value, then what would happen?
The sawtooth waveform would spend 1/3 of the time below this value and the remaining 2/3 of the time above it. So our digital output pin would be a square wave that is low for 1/3 of the time and high for 2/3 of the time.
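To make the BOTTOM/TOP/comparator idea concrete, here is a rough C sketch (mine, not from the original article) of a purely software PWM loop. set_output_pin() and delay_one_tick() are hypothetical placeholders for whatever your hardware actually provides:

#define BOTTOM      0
#define TOP         6        /* the counter runs BOTTOM..TOP-1 and then repeats */
#define COMPARATOR  2        /* 2 units above BOTTOM, as in the example above   */

void set_output_pin(int high);   /* assumed: drives the pin high (1) or low (0) */
void delay_one_tick(void);       /* assumed: waits one time unit                */

void software_pwm(void)
{
    for (;;) {                                       /* repeat the sawtooth forever */
        for (int count = BOTTOM; count < TOP; count++) {
            /* low while the count is below the comparator, high otherwise:
               with these numbers that is low for 2 ticks out of 6 (1/3 of the time)
               and high for 4 ticks out of 6 (2/3 of the time) */
            set_output_pin(count < COMPARATOR ? 0 : 1);
            delay_one_tick();
        }
    }
}

In practice you would rarely burn the CPU doing this by hand - the AVR's timers do exactly this counting and comparing in hardware - but the logic is the same.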



Frequency

In the above example the sawtooth waveform repeated every 3 units. Assuming that each unit was 1ms then our waveform repeats every 3ms.

Given that Frequency = 1 / Time

then the signal frequency is 1/0.003, or 333.33 Hz. Note that with PWM this frequency remains constant - we just use the comparator value to adjust the duty cycle.

Duty Cycle

The percentage of time that our output pin is high is called the duty cycle. In the example above it is high for 2/3 of the time, i.e. a 66.67% duty cycle.
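For reference, here is roughly what this looks like on real hardware - a minimal C sketch, assuming an ATmega16 (the chip used elsewhere on this blog), avr-gcc/avr-libc and the register names from the datasheet. Timer0 in Fast PWM mode does the BOTTOM-to-TOP counting (0 to 255) in hardware, and OCR0 is the comparator value:

#include <avr/io.h>

int main(void)
{
    DDRB |= (1 << PB3);          /* OC0 (PB3) must be configured as an output */

    /* Fast PWM (WGM01:0 = 11), non-inverting output on OC0 (COM01 = 1),
       timer clock = F_CPU / 8 (CS01 = 1).
       PWM frequency = F_CPU / (8 * 256): about 488 Hz with a 1 MHz clock. */
    TCCR0 = (1 << WGM01) | (1 << WGM00) | (1 << COM01) | (1 << CS01);

    OCR0 = 170;                  /* comparator value: roughly 2/3 of 255, */
                                 /* giving roughly a 66% duty cycle       */

    for (;;) {
        /* nothing to do - the timer keeps generating the PWM on its own */
    }
}

Changing OCR0 at run time changes the duty cycle; the frequency stays fixed, exactly as described above.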


stay connected for more on PWM.. and AVR microcontrollers


keep swine flu away: Tulsi, the Indian herb

Tulsi can help keep swine flu away: Ayurvedic experts



Lucknow, May 27: Wonder herb Tulsi can not only keep the dreaded swine flu at bay but also help in fast recovery of an afflicted person, Ayurvedic practitioners claim.


"The anti-flu property of Tulsi has been discovered by medical experts across the world quite recently. Tulsi improves the body's overall defence mechanism including its ability to fight viral diseases. It was successfully used in combating Japanese Encephalitis and the same theory applies to swine flu," Dr U K Tiwari, a herbal medicine practitioner says.

Apart from acting as a preventive medicine in case of swine flu, Tulsi can help the patient recover faster.

"Even when a person has already contracted swine flu, Tulsi can help in speeding up the recovery process and also help in strengthening the immune system of the body," he claims.

Dr Bhupesh Patel, a lecturer at Gujarat Ayurved University, Jamnagar is also of the view that Tulsi can play an important role in controlling swine flu.

"Tulsi can control swine flu and it should be taken in fresh form. Juice or paste of at least 20-25 medium sized leaves should be consumed twice a day on an empty stomach."

This increases the resistance of the body and, thereby, reduces the chances of inviting swine flu," believes Patel.

source : trusted



Bhuvan: India's answer to Google Earth

Bangalore: The Indian Space Research Organization (ISRO) has launched Bhuvan, a mapping application website like Google Earth, which promises to give better 3D satellite images of India and provides India specific features. The scientific community of the country also remembered the father of the Indian space program, Dr. Vikram Sarabhai on his birth anniversary.



Bhuvan, which means earth in Sanskrit, allows users to see any part of the subcontinent barring sensitive locations such as military and nuclear installations. The 3D mapping tool uses images taken a year ago by ISRO's seven remote sensing satellites, including Cartosat-1 and Cartosat-2. The satellites can even capture the images of objects as small as a car on a road.

Bhuvan displays satellite images of varying resolution of India's surface, allowing users to visually see things like cities and important places of interest looking perpendicularly down or at an oblique angle, with different perspectives, and to navigate through a 3D viewing environment. The degree of resolution showcased is based on the points of interest and popularity, but most of the Indian terrain is covered up to at least six meters of resolution, with the coarsest spatial resolution being 55 meters from the Advanced Wide Field Sensor (AWiFS). Bhuvan maps up to 10 meters, compared to 200 meters for Google Earth and 50 meters for Wikimapia.

Hyderabad based National Remote Sensing Agency (NRSA), which is a part of ISRO, had a lead role in designing and developing 'Bhuvan'.

"We were extremely enthusiastic and right from the word go our focus was that it should be useful to users in India," said V Jayaraman, Director, NRSA.

The features include:

* Access, explore and visualize 2D and 3D image data along with rich thematic information on soil, wasteland and water resources
* Superpose administrative boundaries of choice on images as required
* Visualization of AWS (Automatic Weather Stations) data/information in a graphic view and use of tabular weather data of the user's choice
* Heads-Up Display (HUD) navigation controls (tilt slider, north indicator, opacity, compass ring, zoom slider)
* Navigation using the 3D view pop-up menu (fly in, fly out, jump in, jump around, view point)
* Drawing 2D objects (text labels, polylines, polygons, rectangles, 2D arrows, circles, ellipses)
* Drawing 3D objects (placing of expressive 3D models, 3D polygons, boxes)
* Snapshot creation (copies the 3D view to a floating window and allows saving to an external file)

Advanced functionalities which will be provided in the future are urban design tools, contour map and terrain profile.

Madhavan Nair, ISRO Chairman, said that the space agency had started preparations for a mission to Mars within the next six years, and was looking at launch opportunities between 2013 and 2015.



The new CMS

hey all, there is an update on the content management system I've been developing.... the start page of the system is now available.. the skeleton of the system has been developed and now the work is on the core coding...

a little info on the system... for those people who do not know, the CMS I've been developing is built on PHP... now that the skeleton is up, I will be involving some of my pals for development of the algorithms and code... the next post will have the basic code snippets.
stay connected...
and a little help from you guys will be very appreciated...
I haven't figured out a name for the system... since the project is open source I would like the name for this project to be suggested by the general public too...


sorry folks

sorry folks.. I'm kinda busy with a content management project, so the blog will be out of posts for some time... please stay connected... I'm on to a big thing and will post the updates soon...

stay connected


rapidshare time limit hack

well, here is a trick that will guide you to bypass the download restriction put by Rapidshare on free-user downloads.

==============
Directions
Rapidshare traces the user's IP address to limit each user to a certain amount of downloading per day. To get around this, you need to show the Rapidshare server a different IP address. You can do this in one of multiple ways.

Requesting a new IP address from your ISP server.

Here's how to do it in windows:
1. Click Start
2. Click run
3. In the run box type cmd.exe and click OK
4. When the command prompt opens type the following. ENTER after each new line.


ipconfig /flushdns
ipconfig /release
ipconfig /renew
exit


5. Erase your cookies in whatever browser you are using.
6. Try the rapidshare download again.
Frequently you will be assigned a new IP address when this happens. Sometimes you will, sometimes you will not. If you are on a fixed IP address, this method will not work. To be honest, I do not know how to do this in Linux/Unix/etc. If this works for you, you may want to save the above commands into a batch file, and just run it when you need it.

enjoy hacking


delete an undeletable file....

there are instances in computing where you come across some file or folder which, somehow or for some reason, is un-deletable; even if you try hard you are unable to delete it,

well here is the trick by which you will be able to delete that file and be happy.......

Delete An "undeletable" File

Open a Command Prompt window and leave it open.
Close all open programs.
Click Start, Run and enter TASKMGR.EXE
Go to the Processes tab and End Process on Explorer.exe.
Leave Task Manager open.
Go back to the Command Prompt window and change to the directory the AVI (or other undeletable file) is located in.
At the command prompt type DEL filename, where filename is the file you wish to delete.
Go back to Task Manager, click File, New Task and enter EXPLORER.EXE to restart the GUI shell.
Close Task Manager.


Or you can try this

Open Notepad.exe

Click File>Save As..>

locate the folder where your undeletable file is

Choose 'All files' from the file type box

click once on the file you want to delete so its name appears in the 'filename' box

put a " at the start and end of the filename
(the filename should have the extension of the undeletable file so it will overwrite it)

click save,

It should ask you to overwrite the existing file; choose yes and you can delete it as normal


Here's a manual way of doing it.

1. Start
2. Run
3. Type: command
4. To move into a directory type: cd c:\*** (The stars stand for your folder)
5. If you cannot access the folder because it has spaces (for example Program Files or the Kazaa Lite folder) you have to do the following: instead of typing in the full folder name, only take the first 6 letters, then put a ~ and then 1, without spaces. Example: cd c:\progra~1\kazaal~1
6. Once you're in the folder the non-deletable file is in, type dir - a list will come up with everything inside.
7. Now to delete the file, type del ***.bmp, txt, jpg, avi, etc... And if the file name has spaces you would use the same first-6-letters, ~, 1 rule. Example: if your file name was bad file.bmp you would type, once in the specific folder through command, del badfil~1.bmp and your file should be gone. Make sure to type in the correct extension.


uninstalling linux

well, I got a friend through my blog who has a keen interest in Linux, so this post is for her.

well, there is always a point when we want to uninstall a Linux OS, and this is when this tutorial comes in handy.
this tutorial will tell you how to safely remove Linux from your system
===
First of all you need to know where your Linux OS is installed, that is, what drive it is currently living on. Bear in mind that Linux formats the drive with its own file systems (such as ext2/ext3 and swap) rather than FAT/FAT32 or NTFS. (These are the file systems used by the various operating systems.)

These Linux partitions are not seen by Windows, so they are hidden.

To remove the partitions of Linux in Windows XP go to 'Control Panel' > Administrative Tools > Computer Management

Open 'Disk Management' and you will see your Linux drives recognised as 'Unknown Partition', plus the status of the drive. Bearing in mind that you know which partition and disk you installed to, it will be easy to recognise the drive/partition where you had installed it.

Once you have identified the drives, right-click on the drive/partition and select 'Delete Logical Drive'

Once you have followed this through, you will now have free space.

This next part is very important. Once you have deleted the partition, format the resulting free space with your required file system type, either FAT32 or NTFS. Now the important part is coming up!

Fixing your Master Boot Record to make Windows Bootable again.

Have a Windows Boot disk with all the basic DOS Commands loaded on to the disk. A standard Windows 98/Me Boot Disk will work too.

Type in the DOS command :

e.g. from your C:\ prompt

fdisk /mbr

Or use your Windows XP CD to run the Recovery Console, and pick which XP install you would like to boot into (usually you will pick #1)

then type: fixmbr. Answer Y to the dialog.

Your master boot record will now be restored and Windows XP will be bootable once again. Your system will be restored with the original boot loader that came with Windows XP or Vista.


10 reasons your system crashes that you must know

it has been posted here before, but I think it needs another mention over here, so just download the PDF format e-book and enjoy the offline reading too


1 Hardware conflict

The number one reason why Windows crashes is hardware conflict. Each hardware device communicates to other devices through an interrupt request channel (IRQ). These are supposed to be unique for each device.

For example, a printer usually connects internally on IRQ 7. The keyboard usually uses IRQ 1 and the floppy disk drive IRQ 6. Each device will try to hog a single IRQ for itself.

If there are a lot of devices, or if they are not installed properly, two of them may end up sharing the same IRQ number. When the user tries to use both devices at the same time, a crash can happen. The way to check if your computer has a hardware conflict is through the following route:

* Start-Settings-Control Panel-System-Device Manager.

Often if a device has a problem a yellow '!' appears next to its description in the Device Manager. Highlight Computer (in the Device Manager) and press Properties to see the IRQ numbers used by your computer. If the IRQ number appears twice, two devices may be using it.

Sometimes a device might share an IRQ with something described as 'IRQ holder for PCI steering'. This can be ignored. The best way to fix this problem is to remove the problem device and reinstall it.

Sometimes you may have to find more recent drivers on the internet to make the device function properly. A good resource is www.driverguide.com. If the device is a soundcard, or a modem, it can often be fixed by moving it to a different slot on the motherboard (be careful about opening your computer, as you may void the warranty).

When working inside a computer you should switch it off, unplug the mains lead and touch an unpainted metal surface to discharge any static electricity.

To be fair to Microsoft, the problem with IRQ numbers is not of its making. It is a legacy problem going back to the first PC designs using the Intel 8086 chip. Initially there were only eight IRQs. Today there are 16 IRQs in a PC. It is easy to run out of them. There are plans to increase the number of IRQs in future designs.

2 Bad Ram

Ram (random-access memory) problems might bring on the blue screen of death with a message saying Fatal Exception Error. A fatal error indicates a serious hardware problem. Sometimes it may mean a part is damaged and will need replacing.

But a fatal error caused by Ram might be caused by a mismatch of chips. For example, mixing 70-nanosecond (70ns) Ram with 60ns Ram will usually force the computer to run all the Ram at the slower speed. This will often crash the machine if the Ram is overworked.

One way around this problem is to enter the BIOS settings and increase the wait state of the Ram. This can make it more stable. Another way to troubleshoot a suspected Ram problem is to rearrange the Ram chips on the motherboard, or take some of them out. Then try to repeat the circumstances that caused the crash. When handling Ram try not to touch the gold connections, as they can be easily damaged.

Parity error messages also refer to Ram. Modern Ram chips are either parity (ECC) or non parity (non-ECC). It is best not to mix the two types, as this can be a cause of trouble.

EMM386 error messages refer to memory problems but may not be connected to bad Ram. This may be due to free memory problems often linked to old Dos-based programmes.

3 BIOS settings

Every motherboard is supplied with a range of chipset settings that are decided in the factory. A common way to access these settings is to press the F2 or delete button during the first few seconds of a boot-up.

Once inside the BIOS, great care should be taken. It is a good idea to write down on a piece of paper all the settings that appear on the screen. That way, if you change something and the computer becomes more unstable, you will know what settings to revert to.

A common BIOS error concerns the CAS latency. This refers to the Ram. Older EDO (extended data out) Ram has a CAS latency of 3. Newer SDRam has a CAS latency of 2. Setting the wrong figure can cause the Ram to lock up and freeze the computer's display.

Microsoft Windows is better at allocating IRQ numbers than any BIOS. If possible set the IRQ numbers to Auto in the BIOS. This will allow Windows to allocate the IRQ numbers (make sure the BIOS setting for Plug and Play OS is switched to 'yes' to allow Windows to do this).

4 Hard disk drives

After a few weeks, the information on a hard disk drive starts to become piecemeal or fragmented. It is a good idea to defragment the hard disk every week or so, to prevent the disk from causing a screen freeze. Go to

* Start-Programs-Accessories-System Tools-Disk Defragmenter

This will start the procedure. You will be unable to write data to the hard drive (to save it) while the disk is defragmenting, so it is a good idea to schedule the procedure for a period of inactivity using the Task Scheduler.

The Task Scheduler should be one of the small icons on the bottom right of the Windows opening page (the desktop).

Some lockups and screen freezes caused by hard disk problems can be solved by reducing the read-ahead optimisation. This can be adjusted by going to

* Start-Settings-Control Panel-System Icon-Performance-File System-Hard Disk.

Hard disks will slow down and crash if they are too full. Do some housekeeping on your hard drive every few months and free some space on it. Open the Windows folder on the C drive and find the Temporary Internet Files folder. Deleting the contents (not the folder) can free a lot of space.

Empty the Recycle Bin every week to free more space. Hard disk drives should be scanned every week for errors or bad sectors. Go to

* Start-Programs-Accessories-System Tools-ScanDisk

Otherwise assign the Task Scheduler to perform this operation at night when the computer is not in use.

5 Fatal OE exceptions and VXD errors

Fatal OE exception errors and VXD errors are often caused by video card problems.

These can often be resolved easily by reducing the resolution of the video display. Go to

* Start-Settings-Control Panel-Display-Settings

Here you should slide the screen area bar to the left. Take a look at the colour settings on the left of that window. For most desktops, high colour 16-bit depth is adequate.

If the screen freezes or you experience system lockups it might be due to the video card. Make sure it does not have a hardware conflict. Go to

* Start-Settings-Control Panel-System-Device Manager

Here, select the + beside Display Adapter. A line of text describing your video card should appear. Select it (make it blue) and press properties. Then select Resources and select each line in the window. Look for a message that says No Conflicts.

If you have video card hardware conflict, you will see it here. Be careful at this point and make a note of everything you do in case you make things worse.

The way to resolve a hardware conflict is to uncheck the Use Automatic Settings box and hit the Change Settings button. You are searching for a setting that will display a No Conflicts message.

Another useful way to resolve video problems is to go to

* Start-Settings-Control Panel-System-Performance-Graphics

Here you should move the Hardware Acceleration slider to the left. As ever, the most common cause of problems relating to graphics cards is old or faulty drivers (a driver is a small piece of software used by a computer to communicate with a device).

Look up your video card's manufacturer on the internet and search for the most recent drivers for it.

6 Viruses

Often the first sign of a virus infection is instability. Some viruses erase the boot sector of a hard drive, making it impossible to start. This is why it is a good idea to create a Windows start-up disk. Go to

* Start-Settings-Control Panel-Add/Remove Programs

Here, look for the Start Up Disk tab. Virus protection requires constant vigilance.

A virus scanner requires a list of virus signatures in order to be able to identify viruses. These signatures are stored in a DAT file. DAT files should be updated weekly from the website of your antivirus software manufacturer.

An excellent antivirus programme is McAfee VirusScan by Network Associates ( www.nai.com). Another is Norton AntiVirus 2000, made by Symantec ( www.symantec.com).

7 Printers

The action of sending a document to print creates a bigger file, often called a postscript file.

Printers have only a small amount of memory, called a buffer. This can be easily overloaded. Printing a document also uses a considerable amount of CPU power. This will also slow down the computer's performance.

If the printer is trying to print unusual characters, these might not be recognised, and can crash the computer. Sometimes printers will not recover from a crash because of confusion in the buffer. A good way to clear the buffer is to unplug the printer for ten seconds. Booting up from a powerless state, also called a cold boot, will restore the printer's default settings and you may be able to carry on.

8 Software

A common cause of computer crash is faulty or badly-installed software. Often the problem can be cured by uninstalling the software and then reinstalling it. Use Norton Uninstall or Uninstall Shield to remove an application from your system properly. This will also remove references to the programme in the System Registry and leaves the way clear for a completely fresh copy.

The System Registry can be corrupted by old references to obsolete software that you thought was uninstalled. Use Reg Cleaner by Jouni Vuorio to clean up the System Registry and remove obsolete entries. It works on Windows 95, Windows 98, Windows 98 SE (Second Edition), Windows Millennium Edition (ME), NT4 and Windows 2000.

Read the instructions and use it carefully so you don't do permanent damage to the Registry. If the Registry is damaged you will have to reinstall your operating system. Reg Cleaner can be obtained from www.jv16.org

Often a Windows problem can be resolved by entering Safe Mode. This can be done during start-up. When you see the message "Starting Windows" press F4. This should take you into Safe Mode.

Safe Mode loads a minimum of drivers. It allows you to find and fix problems that prevent Windows from loading properly.

Sometimes installing Windows is difficult because of unsuitable BIOS settings. If you keep getting SUWIN error messages (Windows setup) during the Windows installation, then try entering the BIOS and disabling the CPU internal cache. Try to disable the Level 2 (L2) cache if that doesn't work.

Remember to restore all the BIOS settings back to their former settings following installation.

9 Overheating

Central processing units (CPUs) are usually equipped with fans to keep them cool. If the fan fails or if the CPU gets old it may start to overheat and generate a particular kind of error called a kernel error. This is a common problem in chips that have been overclocked to operate at higher speeds than they are supposed to.

One remedy is to get a bigger better fan and install it on top of the CPU. Specialist cooling fans/heatsinks are available from www.computernerd.com or www.coolit.com

CPU problems can often be fixed by disabling the CPU internal cache in the BIOS. This will make the machine run more slowly, but it should also be more stable.

10 Power supply problems

With all the new construction going on around the country the steady supply of electricity has become disrupted. A power surge or spike can crash a computer as easily as a power cut.

If this has become a nuisance for you then consider buying an uninterruptible power supply (UPS). This will give you a clean power supply when there is electricity, and it will give you a few minutes to perform a controlled shutdown in case of a power cut.

It is a good investment if your data are critical, because a power cut will cause any unsaved data to be lost.


How Linux boots

As it turns out, there isn't much to the boot process:

   1. A boot loader finds the kernel image on the disk, loads it into memory, and starts it.
   2. The kernel initializes the devices and its drivers.
   3. The kernel mounts the root filesystem.
   4. The kernel starts a program called init.
   5. init sets the rest of the processes in motion.
   6. The last processes that init starts as part of the boot sequence allow you to log in.

Identifying each stage of the boot process is invaluable in fixing boot problems and understanding the system as a whole. To start, zero in on the boot loader, which is the initial screen or prompt you get after the computer does its power-on self-test, asking which operating system to run. After you make a choice, the boot loader runs the Linux kernel, handing control of the system to the kernel.

There is a detailed discussion of the kernel elsewhere in this book from which this article is excerpted. This article covers the kernel initialization stage, the stage when the kernel prints a bunch of messages about the hardware present on the system. The kernel starts init just after it displays a message proclaiming that the kernel has mounted the root filesystem:

VFS: Mounted root (ext2 filesystem) readonly.

Soon after, you will see a message about init starting, followed by system service startup messages, and finally you get a login prompt of some sort.

NOTE On Red Hat Linux, the init note is especially obvious, because it "welcomes" you to "Red Hat Linux." All messages thereafter show success or failure in brackets at the right-hand side of the screen.

Most of this chapter deals with init, because it is the part of the boot sequence where you have the most control.
init

There is nothing special about init. It is a program just like any other on the Linux system, and you'll find it in /sbin along with other system binaries. The main purpose of init is to start and stop other programs in a particular sequence. All you have to know is how this sequence works.

There are a few different variations, but most Linux distributions use the System V style discussed here. Some distributions use a simpler version that resembles the BSD init, but you are unlikely to encounter this.

Runlevels

At any given time on a Linux system, a certain base set of processes is running. This state of the machine is called its runlevel, and it is denoted with a number from 0 through 6. The system spends most of its time in a single runlevel. However, when you shut the machine down, init switches to a different runlevel in order to terminate the system services in an orderly fashion and to tell the kernel to stop. Yet another runlevel is for single-user mode, discussed later.

The easiest way to get a handle on runlevels is to examine the init configuration file, /etc/inittab. Look for a line like the following:

id:5:initdefault:

This line means that the default runlevel on the system is 5. All lines in the inittab file take this form, with four fields separated by colons occurring in the following order:
   1. A unique identifier (a short string, such as id in the preceding example)
   2. The applicable runlevel number(s)
   3. The action that init should take (in the preceding example, the action is to set the default runlevel to 5)
   4. A command to execute (optional)

There is no command to execute in the preceding initdefault example because a command doesn't make sense in the context of setting the default runlevel. Look a little further down in inittab, until you see a line like this:

l5:5:wait:/etc/rc.d/rc 5

This line triggers most of the system configuration and services through the rc*.d and init.d directories. You can see that init is set to execute a command called /etc/rc.d/rc 5 when in runlevel 5. The wait action tells when and how init runs the command: run rc 5 once when entering runlevel 5, and then wait for this command to finish before doing anything else.

There are several different actions in addition to initdefault and wait, especially pertaining to power management, and the inittab(5) manual page tells you all about them. The ones that you're most likely to encounter are explained in the following sections.

respawn

The respawn action causes init to run the command that follows, and if the command finishes executing, to run it again. You're likely to see something similar to this line in your inittab file:

1:2345:respawn:/sbin/mingetty tty1

The getty programs provide login prompts. The preceding line is for the first virtual console (/dev/tty1), the one you see when you press ALT-F1 or CONTROL-ALT-F1. The respawn action brings the login prompt back after you log out.

ctrlaltdel

The ctrlaltdel action controls what the system does when you press CONTROL-ALT-DELETE on a virtual console. On most systems, this is some sort of reboot command using the shutdown command.

sysinit

The sysinit action is the very first thing that init should run when it starts up, before entering any runlevels.

How processes in runlevels start

You are now ready to learn how init starts the system services, just before it lets you log in. Recall this inittab line from earlier:

l5:5:wait:/etc/rc.d/rc 5

This small line triggers many other programs. rc stands for run commands, and you will hear people refer to the commands as scripts, programs, or services. So, where are these commands, anyway?

For runlevel 5, in this example, the commands are probably either in /etc/rc.d/rc5.d or /etc/rc5.d. Runlevel 1 uses rc1.d, runlevel 2 uses rc2.d, and so on. You might find the following items in the rc5.d directory:

S10sysklogd       S20ppp          S99gpm
S12kerneld        S25netstd_nfs   S99httpd
S15netstd_init    S30netstd_misc  S99rmnologin
S18netbase        S45pcmcia       S99sshd
S20acct           S89atd
S20logoutd        S89cron 

The rc 5 command starts programs in this runlevel directory by running the following commands:

S10sysklogd start
S12kerneld start
S15netstd_init start
S18netbase start
...
S99sshd start 

Notice the start argument in each command. The S in a command name means that the command should run in start mode, and the number (00 through 99) determines where in the sequence rc starts the command.

The rc*.d commands are usually shell scripts that start programs in /sbin or /usr/sbin. Normally, you can figure out what one of the commands actually does by looking at the script with less or another pager program.

You can start one of these services by hand. For example, if you want to start the httpd Web server program manually, run S99httpd start. Similarly, if you ever need to kill one of the services when the machine is on, you can run the command in the rc*.d directory with the stop argument (S99httpd stop, for instance).

Some rc*.d directories contain commands that start with K (for "kill," or stop mode). In this case, rc runs the command with the stop argument instead of start. You are most likely to encounter K commands in runlevels that shut the system down.

Adding and removing services

If you want to add, delete, or modify services in the rc*.d directories, you need to take a closer look at the files inside. A long listing reveals a structure like this:

lrwxrwxrwx . . . S10sysklogd -> ../init.d/sysklogd
lrwxrwxrwx . . . S12kerneld -> ../init.d/kerneld
lrwxrwxrwx . . . S15netstd_init -> ../init.d/netstd_init
lrwxrwxrwx . . . S18netbase -> ../init.d/netbase
... 

The commands in an rc*.d directory are actually symbolic links to files in an init.d directory, usually in /etc or /etc/rc.d. Linux distributions contain these links so that they can use the same startup scripts for all runlevels. This convention is by no means a requirement, but it often makes organization a little easier.

To prevent one of the commands in the init.d directory from running in a particular runlevel, you might think of removing the symbolic link in the appropriate rc*.d directory. This does work, but if you make a mistake and ever need to put the link back in place, you might have trouble remembering the exact name of the link. Therefore, you shouldn't remove links in the rc*.d directories, but rather, add an underscore (_) to the beginning of the link name like this:

mv S99httpd _S99httpd

At boot time, rc ignores _S99httpd because it doesn't start with S or K. Furthermore, the original name is still obvious, and you have quick access to the command if you're in a pinch and need to start it by hand.

To add a service, you must create a script like the others in the init.d directory and then make a symbolic link in the correct rc*.d directory. The easiest way to write a script is to examine the scripts already in init.d, make a copy of one that you understand, and modify the copy.

When adding a service, make sure that you choose an appropriate place in the boot sequence to start the service. If the service starts too soon, it may not work, due to a dependency on some other service. For non-essential services, most systems administrators prefer numbers in the 90s, after most of the services that came with the system.

Linux distributions usually come with a command to enable and disable services in the rc*.d directories. For example, in Debian, the command is update-rc.d, and in Red Hat Linux, the command is chkconfig. Graphical user interfaces are also available. Using these programs helps keep the startup directories consistent and helps with upgrades.

HINT: One of the most common Linux installation problems is an improperly configured XFree86 server that flicks on and off, making the system unusable on console. To stop this behavior, boot into single-user mode and alter your runlevel or runlevel services. Look for something containing xdm, gdm, or kdm in your rc*.d directories, or your /etc/inittab.

Controlling init

Occasionally, you need to give init a little kick to tell it to switch runlevels, to re-read the inittab file, or just to shut down the system. Because init is always the first process on a system, its process ID is always 1.

You can control init with telinit. For example, if you want to switch to runlevel 3, use this command:

telinit 3

When switching runlevels, init tries to kill off any processes that aren't in the inittab file for the new runlevel. Therefore, you should be careful about changing runlevels.

When you need to add or remove respawning jobs or make any other change to the inittab file, you must tell init about the change and cause it to re-read the file. Some people use kill -HUP 1 to tell init to do this. This traditional method works on most versions of Unix, as long as you type it correctly. However, you can also run this telinit command:

telinit q

You can also use telinit s to switch to single-user mode.

Shutting down

init also controls how the system shuts down and reboots. The proper way to shut down a Linux machine is to use the shutdown command.

There are two basic ways to use shutdown. If you halt the system, it shuts the machine down and keeps it down. To make the machine halt immediately, use this command:

shutdown -h now

On most modern machines with reasonably recent versions of Linux, a halt cuts the power to the machine. You can also reboot the machine. For a reboot, use -r instead of -h.

The shutdown process takes several seconds. You should never reset or power off a machine during this stage.

In the preceding example, now is the time to shut down. This argument is mandatory, but there are many ways of specifying it. If you want the machine to go down sometime in the future, one way is to use +n, where n is the number of minutes shutdown should wait before doing its work. For other options, look at the shutdown(8) manual page.

To make the system reboot in 10 minutes, run this command:

shutdown -r +10

On Linux, shutdown notifies anyone logged on that the machine is going down, but it does little real work. If you specify a time other than now, shutdown creates a file called /etc/nologin. When this file is present, the system prohibits logins by anyone except the superuser.

When system shutdown time finally arrives, shutdown tells init to switch to runlevel 0 for a halt and runlevel 6 for a reboot. When init enters runlevel 0 or 6, all of the following takes place, which you can verify by looking at the scripts inside rc0.d and rc6.d:

   1. init kills every process that it can (as it would when switching to any other runlevel).
   2. The initial rc0.d/rc6.d commands run, locking system files into place and making other preparations for shutdown.
   3. The next rc0.d/rc6.d commands unmount all filesystems other than the root.
   4. Further rc0.d/rc6.d commands remount the root filesystem read-only.
   5. Still more rc0.d/rc6.d commands write all buffered data out to the filesystem with the sync program.
   6. The final rc0.d/rc6.d commands tell the kernel to reboot or stop with the reboot, halt, or poweroff program.

The reboot and halt programs behave differently for each runlevel, potentially causing confusion. By default, these programs call shutdown with the -r or -h options, but if the system is already at the halt or reboot runlevel, the programs tell the kernel to shut itself off immediately. If you really want to shut your machine down in a hurry (disregarding any possible damage from a disorderly shutdown), use the -f option. 
Enjoy os jagroon


after the exam is over

well guys, I will be having my exams in the upcoming 10 days, so the blog may be short of posts,

but after the exams are over I promise to put up downloads and other stuff; as I promised in the robotics tutorial, a tutorial regarding microcontroller programming is also coming. stay connected.


beep code error manual

1st thing 1st, this manual is not my creation; it was given to me by a friend of mine who got it over the internet.
For those unfamiliar with the beep thing, here is the summary. Remember the beep sound made by your desktop PC when it boots? When a computer is first turned on, or rebooted, its BIOS performs a power-on self-test (POST) to test the system's hardware, checking to make sure that all of the system's hardware components are working properly. If a problem is found, the POST will normally display an error message on screen; however, if the BIOS detects an error before it can access the video card, or if there is a problem with the video card itself, it will produce a series of beeps, and the pattern of the beeps indicates what kind of problem the BIOS has detected.
Because there are many brands of BIOS, there are no standard beep codes for every BIOS.


The two most-used brands are AMI (American Megatrends International) and Phoenix.

Below are listed the beep codes for AMI systems, followed by the beep codes for Phoenix systems.


AMI Beep Codes

Beep Code Meaning
1 beep DRAM refresh failure. There is a problem in the system memory or the motherboard.
2 beeps Memory parity error. The parity circuit is not working properly.
3 beeps Base 64K RAM failure. There is a problem with the first 64K of system memory.
4 beeps System timer not operational. There is problem with the timer(s) that control functions on the motherboard.
5 beeps Processor failure. The system CPU has failed.
6 beeps Gate A20/keyboard controller failure. The keyboard IC controller has failed, preventing gate A20 from switching the processor to protected mode.
7 beeps Virtual mode exception error.
8 beeps Video memory error. The BIOS cannot write to the frame buffer memory on the video card.
9 beeps ROM checksum error. The BIOS ROM chip on the motherboard is likely faulty.
10 beeps CMOS checksum error. Something on the motherboard is causing an error when trying to interact with the CMOS.
11 beeps Bad cache memory. An error in the level 2 cache memory.
1 long beep, 2 short Failure in the video system.
1 long beep, 3 short A failure has been detected in memory above 64K.
1 long beep, 8 short Display test failure.
Continuous beeping A problem with the memory or video.


Phoenix Beep Codes

Phoenix uses sequences of beeps to indicate problems. The "-" between each number below indicates a pause between each beep sequence. For example, 1-2-3 indicates one beep, followed by a pause and two beeps, followed by a pause and three beeps. Phoenix versions before 4.x use 3-beep codes, while Phoenix versions starting with 4.x use 4-beep codes. The AMI BIOS beep codes are listed above.
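Just to illustrate the pattern format (my own toy example, not part of the manual), a few lines of C can 'play' a code such as 1-2-3 through the terminal bell, assuming a POSIX system where the bell character actually beeps:

#include <stdio.h>
#include <unistd.h>                /* sleep(), usleep(); assumes a POSIX system */

static void sound_beep_code(const char *code)
{
    for (const char *p = code; *p; p++) {
        if (*p == '-') {           /* a dash is a pause between beep groups */
            sleep(1);
            continue;
        }
        int beeps = *p - '0';      /* each digit is the number of beeps in a group */
        for (int i = 0; i < beeps; i++) {
            putchar('\a');         /* terminal bell */
            fflush(stdout);
            usleep(250000);        /* short gap between individual beeps */
        }
    }
}

int main(void)
{
    sound_beep_code("1-2-3");      /* one beep, pause, two beeps, pause, three beeps */
    return 0;
}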
4-Beep Codes
Beep Code Meaning
1-1-1-3 Faulty CPU/motherboard. Verify real mode.
1-1-2-1 Faulty CPU/motherboard.
1-1-2-3 Faulty motherboard or one of its components.
1-1-3-1 Faulty motherboard or one of its components. Initialize chipset registers with initial POST values.
1-1-3-2 Faulty motherboard or one of its components.
1-1-3-3 Faulty motherboard or one of its components. Initialize CPU registers.
1-1-3-2
1-1-3-3
1-1-3-4 Failure in the first 64K of memory.
1-1-4-1 Level 2 cache error.
1-1-4-3 I/O port error.
1-2-1-1 Power management error.
1-2-1-2
1-2-1-3 Faulty motherboard or one of its components.
1-2-2-1 Keyboard controller failure.
1-2-2-3 BIOS ROM error.
1-2-3-1 System timer error.
1-2-3-3 DMA error.
1-2-4-1 IRQ controller error.
1-3-1-1 DRAM refresh error.
1-3-1-3 A20 gate failure.
1-3-2-1 Faulty motherboard or one of its components.
1-3-3-1 Extended memory error.
1-3-3-3
1-3-4-1
1-3-4-3 Error in first 1MB of system memory.
1-4-1-3
1-4-2-4 CPU error.
1-4-3-1
2-1-4-1 BIOS ROM shadow error.
1-4-3-2
1-4-3-3 Level 2 cache error.
1-4-4-1
1-4-4-2
2-1-1-1 Faulty motherboard or one of its components.
2-1-1-3
2-1-2-1 IRQ failure.
2-1-2-3 BIOS ROM error.
2-1-2-4
2-1-3-2 I/O port failure.
2-1-3-1
2-1-3-3 Video system failure.
2-1-1-3
2-1-2-1 IRQ failure.
2-1-2-3 BIOS ROM error.
2-1-2-4 I/O port failure.
2-1-4-3
2-2-1-1 Video card failure.
2-2-1-3
2-2-2-1
2-2-2-3 Keyboard controller failure.
2-2-3-1 IRQ error.
2-2-4-1 Error in first 1MB of system memory.
2-3-1-1
2-3-3-3 Extended memory failure.
2-3-2-1 Faulty motherboard or one of its components.
2-3-2-3
2-3-3-1 Level 2 cache error.
2-3-4-1
2-3-4-3 Motherboard or video card failure.
2-3-4-1
2-3-4-3
2-4-1-1 Motherboard or video card failure.
2-4-1-3 Faulty motherboard or one of its components.
2-4-2-1 RTC error.
2-4-2-3 Keyboard controller error.
2-4-4-1 IRQ error.
3-1-1-1
3-1-1-3
3-1-2-1
3-1-2-3 I/O port error.
3-1-3-1
3-1-3-3 Faulty motherboard or one of its components.
3-1-4-1
3-2-1-1
3-2-1-2 Floppy drive or hard drive failure.
3-2-1-3 Faulty motherboard or one of its components.
3-2-2-1 Keyboard controller error.
3-2-2-3
3-2-3-1
3-2-4-1 Faulty motherboard or one of its components.
3-2-4-3 IRQ error.
3-3-1-1 RTC error.
3-3-1-3 Key lock error.
3-3-3-3 Faulty motherboard or one of its components.
3-3-3-3
3-3-4-1
3-3-4-3
3-4-1-1
3-4-1-3
3-4-2-1
3-4-2-3
3-4-3-1
3-4-4-1
3-4-4-4 Faulty motherboard or one of its components.
4-1-1-1 Floppy drive or hard drive failure.
4-2-1-1
4-2-1-3
4-2-2-1 IRQ failure.
4-2-2-3
4-2-3-1
4-2-3-3
4-2-4-1 Faulty motherboard or one of its components.
4-2-4-3 Keyboard controller error.
4-3-1-3
4-3-1-4
4-3-2-1
4-3-2-2
4-3-3-1
4-3-4-1
4-3-4-3 Faulty motherboard or one of its components.
4-3-3-2
4-3-3-4 IRQ failure.
4-3-3-3
4-3-4-2 Floppy drive or hard drive failure.
3-Beep Codes
Beep Code Meaning
1-1-2 Faulty CPU/motherboard.
1-1-3 Faulty motherboard/CMOS read-write failure.
1-1-4 Faulty BIOS/BIOS ROM checksum error.
1-2-1 System timer not operational. There is a problem with the timer(s) that control functions on the motherboard.
1-2-2
1-2-3 Faulty motherboard/DMA failure.
1-3-1 Memory refresh failure.
1-3-2
1-3-3
1-3-4 Failure in the first 64K of memory.
1-4-1 Address line failure.
1-4-2 Parity RAM failure.
1-4-3 Timer failure.
1-4-4 NMI port failure.
2-_-_ Any combination of beeps after 2 indicates a failure in the first 64K of memory.
3-1-1 Master DMA failure.
3-1-2 Slave DMA failure.
3-1-3
3-1-4 Interrupt controller failure.
3-2-4 Keyboard controller failure.
3-3-1
3-3-2 CMOS error.
3-3-4 Video card failure.
3-4-1 Video card failure.
4-2-1 Timer failure.
4-2-2 CMOS shutdown failure.
4-2-3 Gate A20 failure.
4-2-4 Unexpected interrupt in protected mode.
4-3-1 RAM test failure.
4-3-3 Timer failure.
4-3-4 Time of day clock failure.
4-4-1 Serial port failure.
4-4-2 Parallel port failure.
4-4-3 Math coprocessor.


10 reasons your PC crashes... you must know.

The 10 reasons you must know why your system crashes....
click here to download it in PDF format.
please comment if you like it.


no posts till 20th

sorry guys, I'm out of town, so I will not be posting much till the 20th of this month,
check in then - lots of stuff coming your way, like the most awaited antivirus results... other things are some tricks regarding hacking and maintaining your PC....
stay connected and secure...


Hack it

Okay guys, here's a very funny incident. I was with my friends in a group study on RDBMS
(SQL), and suddenly one of my friends popped a question about hacking an internet account (that is what this post is all about) and accessing its personal data. They were talking about hacking social sites like Orkut, email providers like Yahoo etc., and I told them that it is possible with a very little KNOWLEDGE of SQL (THE SQL INJECTION TECHNIQUE will not be discussed here); even if they don't know SQL they can do it with a little knowledge of JavaScript (a word of caution: Java and JavaScript are different). I told them the techniques and methods of hacking. The following day I was surprised when a friend of one of my friends told him that his Yahoo account had been compromised, and now he wants me to hack back his profile.
So THIS POST IS FOR THAT FRIEND OF A FRIEND.

1st of all I would like to say: hacking a Yahoo ID is (generally) not a joke. It's not like the movies, where a programmer sits behind some wireless network, types some quick code, gets into the database and hacks the required account. Listen up: all these sites PAY THOUSANDS OF DOLLARS TO SECURE THEIR DATABASES, BUT that does not eradicate the possibility of hacking. It's you who help the hackers get into your pants.


Here I will explain how it is possible to get into and hack a Yahoo account, and I will also tell you the ways an Orkut account is hacked, BUT REMEMBER I WILL NOT GIVE YOU THE CODES OR ANYTHING WHICH WILL LET YOU DO THE HACKING. So if you are on the wrong side, or the bad guy in this place, JUST PISS OFF!!

OKAY I WILL CUT THE CRAP AND WILL COME BACK TO ORIGINAL TOPIC OF HACKING.

The four most common ways of hacking Yahoo IDs, or any others, are...

1.) Social Engineering
2.) Password Crackers
3.) Using Password Stealing Trojans/Keyloggers
4.) Fake Login Pages

1.
Social Engineering is actually nothing but trying to learn your personal and confidential details and then using them to change your password. BUT HOW? OK, there's a 'forgot password' option with Yahoo which asks for your birthday, country & ZIP code and later your security question. Now, generally the lamers who try this mode of hacking have lots of time to waste. They will put you into some kind of friendship/emotional trap and try to get all the above mentioned information. It may take 1-2 days or even 1-2 months... (Really, I pity such lamers!)
2.
The second kind of hacking attempt is done with the help of Yahoo password crackers. I doubt their efficiency, but still some of them get lucky (or, the other way round, you are stupid, lol). Password crackers and password changers use the brute force technique with their updated wordlists. WHAT IS BRUTE FORCE? I'll make it simple: it's trying all possible combinations and permutations of the available data as the password. But again, it takes a hell of a lot of time to crack a password...
3.
The third, and one of the most frequently used, ways of hacking or stealing Yahoo passwords is using trojans and keyloggers. WHAT ARE TROJANS? Hmmm... I already have one... but still: TROJANS (widely known as viruses) are simple programs with a server part and a client part. You infect the victim's computer with the server part, and the server then connects to the client running on your system and sends passwords and vital information. And KEYLOGGERS are programs which record your keystrokes in a log.txt file and send that log file to the hacker...

I have this trojan program; believe me, it worked, guys - I tried it on my own Yahoo ID. I don't want to say more about it - secrets and some website regulations... lol.. ;) So please be careful when you are accepting any files sent by someone.

Once infected by these trojans, the server part sends your password to the hacker's Yahoo Messenger ID as PMs...


4.
The last form of Yahoo password stealing is done by using FAKE LOGIN PAGES. Now what the F**K :-) is a fake login page? These are cloned pages of the real Yahoo Mail sign-in pages. They look very similar to their real counterparts and are really very difficult to distinguish. Once you put in your real ID and password and press the submit button, you will be redirected to some other page or an invalid login page, but the trick has already been played by this time: your ID and password have been mailed to the hacker's mail ID using a 3rd party SMTP server, and you don't even realize that you have been HACKED...

So be careful. Always view the address bar. If the address bar shows something like http://mail.yahoo.com or http://edit.login.yahoo.com then it's the authentic page, but if it's something different then DO NOT log in.


WELL WELL, I think I've posted enough info for my friend's friend, and regarding hacking his account back, it will be up to him what he decides.
Now comes the most interesting part: "hacking ORKUT"

okay, most Orkut accounts get compromised either by social engineering techniques, or by scripting and fake pages,

Hacking Orkut involves a little bit of serious skill, like sometimes JavaScript programming, and some tools like a special browser and an editor.
What the hacker does is send you a JavaScript snippet with a tempting message and ask you to copy the script and paste it into your browser. So what happens? When you copy the script and do as asked, you yourself have helped the hacker and made that BIGGEST MISTAKE!!
The script actually copies a file from your system and sends it to the hacker. The file which is sent to the hacker is the kind of file that is stored on your system to help authenticate your login (a cookie); the hacker uses a special tool to edit that file and log in to your account, AND VOILA, YOUR ACCOUNT IS HACKED!!

THIS INFORMATION IS FOR THOSE PEOPLE WHO THINK GOOGLE CAN BE FOOLED..... WELL, THE TRUTH IS GOOGLE CAN'T BE FOOLED, YOU FOOLS!!
Google uses a 4-level Orkut login which makes it difficult to hack using a brute force attack.

First Level: security - SSL or 128 bit secured connection

Second Level: the Google account checks for that special file on the user's system

Third Level: Google provides a redirection to the entered user information

Fourth Level: Google doesn't use conventional php/aspx/asp coding, so it is impossible to attack using an input validation attack!!!



It is not an easy task to break this security! But still, some people manage to get access to other accounts. The question is: how do they do it? Many of them just use simple tricks that fool users into leaking out their passwords themselves.


One more thing: I can't stop you from copying this post, but if you do copy it, just take a little pain and refer to this blog, for whatsoever cause it may be.

REGARDS
Harsh
Alternatively you can download the whole article in pdf format here....download


Now it has started..... the hunt for the best antivirus

okay, guys i ve arranged all the required tools to aid us to hunt down the best antiviruse in the software market ... right now.
assement will be done on following Rules...
The test will have the duration of 1 week and each antiviruse will be tested on following guidelines
  1. The system will have only one antiviruse installed at a time.
  2. None of the threats detected will be deleted until and unless there is a complete compulsion this facilitates the aviablity of the same viruse for another antiviruse and this way we can find out types of viruse detected by two antiviruses.
  3. Researchers should keep track of the viruses detected.
  4. They should note down the Name of the viruse detected and the type of viruse.
  5. what time is taken by the antiviruse to scan a system which should be noted by the Researcher.
  6. Most important is the the GUI (rate the GUI- graphics user interface ) of the system.
  7. if any ambiguity arisses they should be noted after every note.
  8. This step is The initial step whose result tells us which antiviruse is best among others for common people.
  9. Another most imp part is that the system should have acess to internet.
Stay connected; the format for noting down observations will be out by tomorrow.
NOTE: guys and girls who are interested in this project, some news for you: we are still looking for people interested in giving some time to this project.
Contact: wardhanster@gmail.com


ATmega16 programmer

Okay, this part is for the guys and girls who read the line-follower article and want to get started with robotics.
This is the most important part of robotics: this device is used to program (burn) the program into the microcontroller.


The next tutorial will cover the programming part, which is done in C, and then burning the program using WinAVR. In the meantime, the sketch below gives a taste of what such a program looks like.
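This is only a minimal illustrative example, not the tutorial code itself: it assumes an LED wired to pin PB0 of the ATmega16 and a 1 MHz clock, and simply blinks it. Compile it with WinAVR (avr-gcc) and burn the resulting hex file with your programmer.

CODE
/* Minimal ATmega16 example: blink an LED on PB0 (assumed wiring, 1 MHz clock). */
#define F_CPU 1000000UL            /* assumed clock speed, needed by util/delay.h */

#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= (1 << PB0);            /* make PB0 an output pin */

    while (1) {
        PORTB ^= (1 << PB0);       /* toggle the LED */
        _delay_ms(500);            /* wait half a second */
    }
    return 0;                      /* never reached */
}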


Robotics for beginners

Well well, I've been asked many times by people... how do I start robotics?
Well, here is a download which will help those of you thinking of starting robotics.
This guide will walk you through the various ups and downs of starting your robotics hobby career.
Ask any questions about it in the comments section...

The guide explains most of the steps of building a microcontroller-based line-follower robot: it covers the basics of line following with sample tracks and explains the preliminary details of the sensors used.
Apart from that, it gives a graphical view of the microcontroller and motor interface.
The programming part is left as an exercise for you (see the rough sketch below for the general idea).
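Here is a very rough sketch of the kind of control loop such a line follower runs. The wiring is entirely my own assumption (two digital reflectance sensors on PA0/PA1 and a motor driver on PB0-PB3); the guide's actual circuit may differ, so treat this as C-flavoured pseudocode rather than something to copy blindly.

CODE
/* Rough line-follower loop: two digital line sensors steer two motors.
   All pin assignments here are assumptions for illustration only. */
#include <avr/io.h>
#include <stdint.h>

#define LEFT_ON_LINE   (PINA & (1 << PA0))   /* left sensor sees the line  */
#define RIGHT_ON_LINE  (PINA & (1 << PA1))   /* right sensor sees the line */

static void drive(uint8_t left, uint8_t right)
{
    /* PB0 runs the left motor forward, PB2 the right motor forward. */
    PORTB = (left ? (1 << PB0) : 0) | (right ? (1 << PB2) : 0);
}

int main(void)
{
    DDRA = 0x00;                   /* sensor pins as inputs */
    DDRB = 0x0F;                   /* motor pins as outputs */

    while (1) {
        if (LEFT_ON_LINE && !RIGHT_ON_LINE)
            drive(0, 1);           /* line has drifted left: turn left   */
        else if (RIGHT_ON_LINE && !LEFT_ON_LINE)
            drive(1, 0);           /* line has drifted right: turn right */
        else
            drive(1, 1);           /* centred (or lost): go straight     */
    }
    return 0;
}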

enjoy the download..... The line follower
Harsh
Coming soon:
tutorial on microcontroller interfacing
tutorial on C for controlling microcontrollers
tutorial on some very simple hacking tips...

stay connected and visit daily for updates


A really cool and fun thing

A really cool and fun thing mailed by my friend Satish Singh.


Windows Vista broadband patch

Download and run this Vista patch and select "apply patch".
The patch runs in the command prompt and frees up the hidden bandwidth of the broadband connection that is otherwise reserved by the operating system.
It's a one-time installation; you can revert to the original settings by re-running the patch and selecting the appropriate option.

Download here vista_tcip_patch


enjoy the download
Harsh


Wallpaper collection... hottest girls on the net

This post was deleted by the owner for legal purposes.


Optimize your DSL/cable broadband connection

Please do comment.
To optimize your DSL/cable connection speed:

First, go to Start, then Run, and type regedit in the box. Next, navigate to the key HKEY_LOCAL_MACHINE\System\CurrentControlSet\VxD\MSTCP
Now find the string value DefaultRcvWindow and edit the number to 64240, then restart your computer. There you go: a faster cable modem without downloading a program. (The original value is 373360, in case you want to change it back.)
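If you'd rather not click through regedit by hand, a small Win32 program can make the same change. This is only a sketch of the tweak described above (the key and value are the ones from this post, which apply to Windows 9x); back up your registry first and run it at your own risk.

CODE
/* Sets DefaultRcvWindow to 64240 under the Windows 9x TCP key from this post. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *path  = "System\\CurrentControlSet\\VxD\\MSTCP";
    const char *value = "64240";          /* the value suggested above */
    HKEY key;

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, path, 0, KEY_SET_VALUE, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "Could not open HKLM\\%s\n", path);
        return 1;
    }
    /* DefaultRcvWindow is stored as a string (REG_SZ) on Windows 9x. */
    if (RegSetValueExA(key, "DefaultRcvWindow", 0, REG_SZ,
                       (const BYTE *)value, (DWORD)strlen(value) + 1) != ERROR_SUCCESS) {
        fprintf(stderr, "Could not set DefaultRcvWindow\n");
        RegCloseKey(key);
        return 1;
    }
    RegCloseKey(key);
    puts("DefaultRcvWindow set to 64240 - reboot for the change to take effect.");
    return 0;
}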


Optimise broadband speed

First of all download this wonderful program:

h@@p://www.speedguide.net/files/TCPOptimizer.exe

Then, when you start the program, go to Settings and choose Cable Modem or DSL, whichever you have.

Go to MaxMTU and set it to 1500. This is optimal; anything above this will not work as well.

That's about it!! Enjoy the speed!!


bittorrent tutorial

PART 1
-------------------------------------------------------------------------------------
Let's get into BitTorrent, because it's the easiest thing to set up.

All you do is install the BitTorrent client (see link above), then go to
CODE
http://suprnova.org
and click on a torrent you like. I recommend that you right-click and save the torrent, then click on the saved torrent to start the download. This way, if your download fails, you can resume it from the torrent you saved rather than having to go back to the website. Confusing? It may be; that's why I recommend you go here for more helpful info.
CODE
http://www.dessent.net/btfaq/


The next thing to note is that if you are using a firewall, you will need to open up some TCP ports, namely 6881 to 6999. Otherwise the program will show you a yellow dot and your downloads will be slow!!!

Now, for all of you with limited connections: it IS true that the faster you share the file you are downloading, the faster you will download it. But if you saturate your uplink, aside from making your internet connection crawl slower than a constipated snail, you will also slow down your BitTorrent download, because your PC will not be able to acknowledge the packets it receives fast enough, since all your uplink is being used for sharing. In this case the *WISE* thing to do is to click on the torrent window, select Settings for [Dial up/ISDN] and move the arrow on the right of this all the way down to 3k for uploads.

I would also like to point out that with BitTorrent, unlike other P2P sharing programs, you only share the file which you are downloading and NO other files on your PC. Torrents work by downloading bits of the file, like a puzzle, from various people. So if you have a part of the puzzle that someone else wants, you swap, and so on.

(IMPORTANT: often a torrent may appear to be complete on your hard disk (taking up the 500 megs you expected) but it won't really be, because torrent clients often reserve the space up front and then fill it in with the missing bits of the puzzle. PLEASE remember that a torrent is not finished downloading until it says "Download Finished".)

It is also generally considered polite to leave the torrent open even after you have finished your download so that other people can download from you. If you don't want to, then at least do it at times when there are 0 seeds and only a few peers; that way you keep the torrent alive. (A seed is a person who has posted the torrent or left their finished file for sharing, and peers are people who are downloading the file, i.e. the seed has the entire file and the peers don't and are still downloading it.)

FINALLY, you should be able to find lots of handy stuff on suprnova, but before you click to download a file, check that it has AT LEAST 1 SEED, or, if it has 0 seeds, that it has quite a few peers. The reason is that it is possible that all those people combined will not have enough data between them to put together the entire thing you are downloading (you will know this is the case if, after a while, you still have a blue dot). Sometimes I have left these files going for a day or two and someone has kindly come in and shared their file again, and I managed to finish the downloads, so don't give up on these files straight away.

I know I have written a fair bit here, but you can probably ignore most of it, heheh.

Happy Torrenting!!

PS. If you have not found all the stuff you need on
CODE
http://suprnova.org
(you may notice they don't serve porn) then you may want to give one of these links a go
CODE
http://members.lycos.nl/gettorrents/index.php?


PPS. suprnova.org down? Try one of the mirrors (Google is your friend: "suprnova.org mirrors"). Finally, sometimes the mirror works but the torrent does not; in this case try modifying the link to the torrent so it points to another one of suprnova's mirrors. For example, if the link says
CODE
yellowhouse.com/suprnova/torrents/smellytorrent.torrent try altering it to phobal.ca/suprnova/torrents/smellytorrent.torrent


CODE
http://www.btsites.tk/

CODE
http://www.torrentbox.com/

CODE
http://isohunt.com/
[<-= submitted by LanoX] (a good IRC & BitTorrent search engine)

Thanks to OLI who wrote these guides.


LeTs BuZz FuNnY!

Here is a bunch of funny ringtones for your mobile phone:

  • download them to your PC
  • save them
  • use your phone's PC suite to transfer the tones (the tones are in MP3 format)
Here are the links... enjoy:
scary movie
HIJRA
my fav
dady phone ringing


Tips for computer geeks

This tutorial will help you optimise your virtual memory.
*******
Virtual Memory Optimization Guide Rev. 4.0 - Final

Virtual Memory

Back in the 'good old days' of command prompts and 1.2MB floppy disks, programs needed very little RAM to run because the main (and almost universal) operating system was Microsoft DOS and its memory footprint was small. That was truly fortunate because RAM at that time was horrendously expensive. Although it may seem ludicrous, 4MB of RAM was considered then to be an incredible amount of memory.

However when Windows became more and more popular, 4MB was just not enough. Due to its GUI (Graphical User Interface), it had a larger memory footprint than DOS. Thus, more RAM was needed.

Unfortunately, RAM prices did not decrease as fast as RAM requirement had increased. This meant that Windows users had to either fork out a fortune for more RAM or run only simple programs. Neither were attractive options. An alternative method was needed to alleviate this problem.

The solution they came up with was to use some space on the hard disk as extra RAM. Although the hard disk is much slower than RAM, it is also much cheaper and users always have a lot more hard disk space than RAM. So, Windows was designed to create this pseudo-RAM or in Microsoft's terms - Virtual Memory, to make up for the shortfall in RAM when running memory-intensive programs.



How Does It Work?

Virtual memory is created using a special file called a swapfile or paging file.

Whenever the operating system has enough memory, it doesn't usually use virtual memory. But if it runs out of memory, the operating system will page out the least recently used data in the memory to the swapfile in the hard disk. This frees up some memory for your applications. The operating system will continuously do this as more and more data is loaded into the RAM.

However, when any data stored in the swapfile is needed, it is swapped with the least recently used data in the memory. This allows the swapfile to behave like RAM although programs cannot run directly off it. You will also note that because the operating system cannot directly run programs off the swapfile, some programs may not run even with a large swapfile if you have too little RAM.
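If you're curious how much of the RAM and paging file is actually in use on your own machine at any given moment, a few lines of Win32 C will tell you. This is only an illustrative sketch using the documented GlobalMemoryStatusEx call (available on Windows 2000 and later); it reports usage and changes nothing.

CODE
/* Prints current physical memory and commit (RAM + paging file) usage. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);              /* must be set before the call */

    if (!GlobalMemoryStatusEx(&ms)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed\n");
        return 1;
    }

    printf("Memory load             : %lu%%\n", ms.dwMemoryLoad);
    printf("Physical RAM            : %llu MB free of %llu MB\n",
           ms.ullAvailPhys / (1024 * 1024), ms.ullTotalPhys / (1024 * 1024));
    printf("Commit (RAM + page file): %llu MB free of %llu MB\n",
           ms.ullAvailPageFile / (1024 * 1024), ms.ullTotalPageFile / (1024 * 1024));
    return 0;
}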


Swapfile Vs. Paging File

We have all been using the terms swapfile and paging file interchangeably. Even Microsoft invariably refers to the paging file as the swapfile and vice versa. However, the swapfile and paging file are two different entities. Although both are used to create virtual memory, there are subtle differences between the two.

The main difference lies in their names. Swapfiles operate by swapping entire processes from system memory into the swapfile. This immediately frees up memory for other applications to use.

In contrast, paging files function by moving "pages" of a program from system memory into the paging file. These pages are 4KB in size. The entire program does not get swapped wholesale into the paging file.

While swapping occurs when there is heavy demand on the system memory, paging can occur preemptively. This means that the operating system can page out parts of a program when it is minimized or left idle for some time. The memory used by the paged-out portions is not immediately released for use by other applications. Instead, it is kept on standby.

If the paged-out application is reactivated, it can instantly access the paged-out parts (which are still stored in system memory). But if another application requests the memory space, then the system memory held by the paged-out data is released for its use. As you can see, this is really quite different from the way a swapfile works.

Swapfiles were used in old iterations of Microsoft Windows, prior to Windows 95. From Windows 95 onwards, all Windows versions use only paging files. Therefore, the correct term for the file used to create virtual memory in current operating systems is paging file, not swapfile.

Because both swapfiles and paging files do the same thing - create virtual memory, people will always refer to swapfiles and paging files interchangeably. Let's just keep in mind their innate differences.
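Incidentally, the 4KB page size mentioned above isn't something you have to take on faith; Windows will report it if asked. Here is a tiny sketch using the documented GetSystemInfo call:

CODE
/* Prints the memory page size used by the running copy of Windows. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("Page size: %lu bytes\n", si.dwPageSize);   /* typically 4096 on x86 */
    return 0;
}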


Do We Still Need A Paging File?

Even today, when the average home user's computer comes with at least 256MB of RAM, the paging file is still very important. While the large amount of RAM in the average user's computer makes the risk of memory shortage much less of a worry with single applications now than it was back then; the paging file is essential when multi-tasking.

Note that over the years, the emphasis has changed to multi-tasking. No longer will people be solely stuck to using one application at a time. In fact, it is common to have 10 or more applications running simultaneously!

For example, I normally have the following applications running at the same time :-

+ Microsoft Outlook
+ Internet browsers like Maxthon and Firefox
+ An FTP client
+ Instant messengers like Trillian and MSN Messenger
+ A download manager like FlashGet
+ Macromedia Dreamweaver
+ P2P clients like ShareAza
+ An antivirus software
+ Adobe Acrobat Reader with a few PDF documents opened

That's a total of 10-12 applications running simultaneously!

Even with 256MB of RAM, it would be impossible to load everything into memory. A paging file is needed to store the least used data in the memory so that I can open up all those applications I need. And let's not forget the disk cache.

Operating systems like Windows 98 and Windows XP allocate a sizeable portion of the RAM to the disk cache. This speeds up access to hard disk data by caching the most frequently used data as well as the data that is most likely to be accessed next by the computer. This cuts down on the amount of available RAM. So, without a paging file, you won't be able to open many applications even if your computer has 256MB of RAM.

Finally, some programs require the use of a paging file to function properly. It may be to store sensitive data on something less volatile than the RAM or to ensure the computer will have sufficient memory to run it. But whatever the reasons, a paging file is needed in order for these programs to run.


Why Optimize The Paging File?

Unless your computer is truly loaded with RAM, it will almost always use the paging file. As such, its performance affects the performance of the whole computer.

Now, using a paging file may sound like a really cheap way to run memory intensive programs without the expense of buying more RAM. However, even the fastest hard disk is more than an order of magnitude slower than the slowest RAM.

Even the fastest hard disk is currently over 70X slower than the dual-channel PC2700 DDR memory common in many computers. Let's not even start comparing the hard disk with faster RAM solutions like PC3200 DDR memory or PC2-4200 DDR2 memory.

So, the paging file is only a stopgap solution for the lack of sufficient RAM. As long as you use the paging file, there will always be some performance degradation. The ideal solution for insufficient RAM is always more RAM, not more virtual memory. But since we can't afford all the RAM we want, a paging file is necessary for us to run today's memory-guzzling programs.

As you can tell, more isn't better for the paging file because more paging file space will only give you the ability to run more memory intensive programs at once. It will not speed up your system. But what we can do is to optimize the paging file so that the performance degradation when using it is minimized.



So How Do We Optimize The Paging File?

There have been many theories on how to optimize the paging file. The most important ones are listed below :-

+ Making the paging file contiguous.
+ Moving the paging file to the outer tracks of the hard disk.
+ Creating a huge paging file.
+ Moving the paging file to a different partition in the same hard disk.
+ Moving the paging file to a different hard disk.
+ Creating multiple paging files
+ Moving the paging file to a RAID array
+ Moving the paging file to a RAM drive
+ Reducing reliance on virtual memory

We will examine those methods and see what will work and what won't.

Virtual Memory Then And Now

Windows 3.1

Back in the good old days of DOS 6.22 and Windows 3.1, everyone knew that creating a permanent swapfile was the key to optimal swapfile performance. This was because Windows 3.1 only creates permanent swapfiles that are contiguous.

A contiguous swapfile is a swapfile that consists of an uninterrupted block of hard disk space. When a swapfile is contiguous, the read-write heads of the hard disk can read and write data on the swapfile in a continuous fashion.

In Windows 3.1, if the swapfile was set up as a temporary swapfile, which is created every time Windows 3.1 boots up, it would end up at the end of the hard disk, and fragmented too. So, every time the swapfile was read from or written to, the hard disk heads had to seek all over the platters to conduct those operations.

Needless to say, this greatly erodes the performance of the swapfile. That's why it was important to make the swapfile permanent in Windows 3.1 - so that the swapfile will become contiguous.



Windows 95 And Above

From Windows 95 onwards, Microsoft encouraged the use of its new dynamic virtual memory system. Of course, there is nothing new about the virtual memory part but the keyword in this new technique is dynamic.

The new dynamic virtual memory system no longer relies on a fixed-size swapfile but a paging file that dynamically resizes itself according to need. When the computer runs out of memory, more memory is created by increasing the size of the paging file. Once the virtual memory is freed up, theoretically the paging file diminishes in size.

Microsoft claims that while its dynamic virtual memory system will create a fragmented paging file, it is still faster than Windows 3.1's static virtual memory system. As a bonus, no hard disk space will be tied up in a permanent paging file.

However, this dynamic virtual memory system does have a big disadvantage - it cannot be moved to the outer tracks of the hard disk platters.

Dynamic Paging Files And Data Locality

There are people who assert that when left alone, Windows XP will actually create virtual memory pages in close proximity to frequently-used data in the hard disk, like documents. In other words, they claim that Windows XP monitors disk usage, maintains a database of frequently-used files in the hard disk and uses that information to create the paging file based on spatial locality.

With virtual memory pages created close to frequently-used data, this apparently allows shorter seeks between frequently-used data and the paging file. That is the premise behind their theory of letting Windows XP handle the paging file dynamically. However, I don't think this is true at all.

First of all, while Windows XP does monitor disk usage and maintain a database of frequently-used files, only disk defragmenting utilities use that database. The built-in Disk Defragmenter, as well as third-party disk defragmenting utilities, use this database to rearrange the hard disk so that frequently-used files can be accessed faster.

But as far as I'm aware, the paging file does not arrange the location of the pages according to this database. From my observations, Windows XP simply uses the nearest available clusters for the dynamic paging file.

In fact, Microsoft states that if you create multiple paging files, Windows XP will favour the partition that is least active. This completely refutes the theory of virtual memory pages being allocated according to spatial locality. Here is a quote from Microsoft's Knowledge Base.

By design, Windows uses the paging file on the less frequently accessed partition over the paging file on the more heavily accessed boot partition. An internal algorithm is used to determine which paging file to use for virtual memory management.

In any case, it doesn't make sense for Windows XP to create the paging file based on spatial locality to work files like your documents. Once opened, Windows keeps the working copy in the Temp folder, not your paging file.

In addition, let us remember that Windows pre-emptively pages out pageable parts of an application in system memory. Windows does not directly load data from the hard disk into the paging file. Therefore, creating virtual memory pages close to frequently-used files will not help at all.

Fragmented Vs. Contiguous

Even though Microsoft asserts that the new dynamic virtual memory system does not benefit much from a contiguous paging file, the fact is maintaining a contiguous paging file will definitely improve the paging file's performance.

A contiguous paging file eliminates the need for the hard disk heads to seek all over the platters while accessing the paging file. The following pictures illustrate my points.


This shows a fragmented paging file (brown)


This shows a contiguous paging file (brown)

See how a contiguous paging file differs from a fragmented paging file? Instead of seeking and reading from a continuous block of hard disk space in the case of a contiguous paging file, the hard disk heads have to seek all over the platters to access the clusters allocated to a fragmented paging file.

As a result, a common operating pattern like the following may emerge :-

Fragmented : seek-read-read-seek-read-seek-read-read-read-seek-read-read-seek
Contiguous : seek-read-read-read-read-read-read-seek-read-read-read-read-read

Of course, the amount of time needed to do the seek operation is different from the time needed to read a block of data off the paging file but the logic remains.

A contiguous paging file allows data to be read with minimal amount of seeks. If the number of seeks can be reduced while accessing the paging file, then more data can be read in less time. This is the premise behind a contiguous paging file.
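To put some rough numbers behind that premise, here is a toy calculation. The seek time, per-block read time and block counts are made-up illustrative figures, not measurements from any real drive; the only point is that reducing the number of seeks reduces the total access time.

CODE
/* Toy model: total time = (number of seeks * seek time) + (blocks read * read time).
   All numbers are illustrative assumptions, not benchmarks. */
#include <stdio.h>

int main(void)
{
    const double seek_ms = 9.0;     /* assumed average seek time                  */
    const double read_ms = 0.5;     /* assumed time to read one paging-file block */
    const int blocks = 100;         /* blocks of paging file to read              */

    double contiguous = 1 * seek_ms + blocks * read_ms;   /* one seek, then stream   */
    double fragmented = 40 * seek_ms + blocks * read_ms;  /* a seek every few blocks */

    printf("Contiguous : %.1f ms\n", contiguous);
    printf("Fragmented : %.1f ms\n", fragmented);
    return 0;
}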

How Do We Create A Contiguous Paging File?

Now that we agree that making the paging file contiguous will greatly improve its performance, let's figure out how to make it contiguous.



Using A Permanent Paging File

Yes, I know. You are all thinking, "Simple! Just make the paging file permanent!"

True, creating a permanent paging file is usually the way to create a contiguous paging file. A permanent paging file ensures that the paging file will remain in one single block. However, creating a permanent paging file does not mean the paging file will automatically become contiguous.

That may have been true in Windows 3.1 but believe it or not, Windows XP does not force the creation of a contiguous paging file when you make the paging file permanent!

When you create a permanent paging file, Windows XP automatically uses the nearest (to the outer tracks) sectors to create the paging file. This creates a permanent but fragmented paging file. Naturally, this reduces the performance of the paging file.

But that's not the end of the world. To avoid this problem, defragment your hard disk before creating the permanent paging file. That will create a contiguous area for Windows XP to create a permanent paging file.



Using A Dynamic Paging File

But making a permanent paging file is not the only way to create a contiguous paging file. You can also create a contiguous paging file that is also dynamic in nature!

All you need to do is create a separate partition for the paging file. This allows the paging file a contiguous space on the hard disk to freely expand according to usage.

At first glance, the benefits of this method seem obvious. It ensures the paging file is always contiguous and yet have the ability to expand when the need arises. However, this method is really not very desirable when you examine it closely.

The main reason for using a dynamic paging file is to save hard disk space by using it only when there is a need for more virtual memory. But creating a partition to allow the paging file to dynamically resize is really defeating the purpose.

The size of the partition limits the maximum size that the dynamic paging file can expand to and you cannot use the partition to store anything else because that would interfere with its contiguity. If you create a big partition, that wastes hard disk space. If you create a small partition, that limits how big the paging file can expand. Therefore, this method is self-defeating.


What About A Semi-Permanent Paging File?

Although everyone knows about dynamic and permanent paging files, there is a third type of paging file - a semi-permanent paging file.

A semi-permanent paging file theoretically allows you to receive the performance benefits of a contiguous permanent paging file without its main disadvantage - the need to predetermine an optimal size. But what is a semi-permanent paging file?

Well, a semi-permanent paging file is a combination of a permanent paging file and a dynamic paging file. It consists of a permanent part and a dynamic part. The permanent part of this paging file behaves exactly like a permanent paging file. It will not change in size and can thus be moved to the outer tracks of the hard disk.

The dynamic part, however, does not normally appear. In fact, it is only created when the permanent part of the semi-permanent paging file is unable to cope with increased memory requirements.

Once created, it dynamically resizes itself to suit the current paging file requirements. Just like the dynamic paging file, it will use any free space on the hard disk so it will be fragmented.



The Advantages Of A Semi-Permanent Paging File

The semi-permanent paging file offers the advantage of never running out of virtual memory. That means even if the permanent part cannot handle the memory load, the application won't halt with an "Out of memory" error message. The dynamic part will come into action and provide the extra virtual memory required by the application.

With a permanent paging file, the application will just halt with the error message and you would have to close one or more applications to free up more memory. However, this is only true for older versions of Windows.

Newer iterations of Windows like Windows XP do not have a true permanent paging file. Even if you set a permanent paging file, Windows XP will automatically generate more virtual memory when it runs out of memory, by adding a dynamic component to the permanent paging file. In short, when you create a "permanent" paging file in Windows XP, you are actually creating a semi-permanent paging file.

The advantage of creating your own semi-permanent paging file, instead of a "permanent" paging file in Windows XP is that you get to avoid the warning message that appears whenever Windows XP runs out of memory and has to create more virtual memory by adding a dynamic component to the permanent paging file.


The Disadvantages Of A Semi-Permanent Paging File

Unfortunately, a semi-permanent paging file is a double-edged sword. With a dynamic component, it is inevitable that a dynamic paging file's disadvantages would also be applicable to it.

As mentioned earlier, the dynamic part will use any available space on the hard disk. This inevitably means the dynamic part of the semi-permanent paging file will always be fragmented. Naturally, the performance of the paging file deteriorates whenever the dynamic part comes into action.

But just how bad could the deterioration be? Let's take a look at the disk map below, which shows a semi-permanent paging file with both permanent and dynamic components in brown :-


This shows a semi-permanent paging file (brown)

You will notice that the paging file is split into two parts. The permanent part is at the outer tracks of the hard disk in one contiguous block. The lower, fragmented blocks of paging file are the dynamic part of the semi-permanent paging file. As the paging file requirement exceeds what the permanent part can provide, the dynamic part of the semi-permanent paging file will dynamically convert available hard disk space (which is usually on the inner tracks on the hard disk) into virtual memory.

Because the paging file's two components are at opposite ends of the hard disk, the hard disk heads will have to seek up and down the platters while servicing the paging file! Needless to say, that greatly degrades the performance of the paging file. The head seeks required to service a dynamic paging file are already bad enough. The amount of head seeks required to service both the permanent part and the fragmented dynamic part will definitely put a big dent on the paging file's performance.

Permanent Or Semi-Permanent?

Performance-wise, both a permanent and a semi-permanent paging file will perform equally, if the virtual memory requirement does not exceed what the permanent component of the semi-permanent paging file can provide. As the dynamic part comes into play, the semi-permanent paging file gradually loses its performance advantage over the dynamic paging file. Eventually, it may even become slower than a dynamic paging file.

The only way around this is to ensure that the permanent part of the semi-permanent paging file is enough to meet your usual virtual memory requirements. Do not look at the semi-permanent paging file as a way to save hard disk space. Instead, think of it as a permanent paging file with a backup capacity for dynamic expansion in emergencies!

Hard disk space is no longer that much of a premium as it was back in the old days. With desktop hard disks approaching half a terabyte in size, allocating a few hundred megabytes or even a gigabyte or so for the paging file isn't going to break anyone's heart. The performance of the paging file, especially in systems with very little RAM or for people who multitask a lot, is definitely more important than saving a few hundred megabytes of hard disk space.

Is Writing And Rewriting To The Same Area Dangerous?

Creating a permanent or semi-permanent paging file inevitably causes numerous writes and rewrites of information in the same fixed area of the hard disk platters. Compared to other areas of the hard disk, the space allocated to the paging file will be the area where data is most often written, deleted and replaced with newer data.

Some users have expressed concern over this fact. Will the platter media in that area get worn out after continuous use? Like the magnetic cassettes that we used to record our favourite songs? Will bad sectors form in that area like the floppy disks that have been written to once too often?

Well, unlike magnetic cassettes or floppy disks, there is actually no contact between the hard disk read-write heads with the platters. The read-write heads actually fly over the platters on a thin cushion of air. In fact, at the high speed that the platters are spinning at, any contact between a read-write head with a platter would have resulted in a head crash, with disastrous consequences.

Therefore, friction isn't the concern here. What about the effect of changing the magnetic properties of the media during the write process? Will the magnetic properties of the media deteriorate after too many of such changes? Or in the context of this article, will creating a permanent paging file damage the drive in the long run and reduce its MTBF (Mean Time Between Failures)?

To obtain a definitive answer to these questions, I contacted IBM and Seagate. Let's see what their technical experts have to say.

Seagate

This should not hurt the drive at all. As you are aware, the heads are actually suspended above the platters on an air bearing, so there is no direct contact with the media. As far as the recording and re-recording of the same tracks, also no problems. What we are dealing with here in order to write the data is simply moving the magnetic domain one way or the other, no wear involved.

Regards,

Bob
Seagate Tech Support



IBM

Remember, the heads truly fly above the media. The wear and tear factor only becomes an issue for bearings (heat) and physical damage to the media if the drive is shocked during operation. Performance is best at the outer tracks of the drive, so any recurring access directed there will benefit you in performance. Writing and rewriting data to a drive is good in that it remagnetizes (refreshes) the area every time it is written.

To answer your question: Your swap file will not affect the MTBF of your drive.

Don Gardner
IBM Hard Disk Technical Support/SIT Lab



So, Are Multiple Writes To The Same Area Good?

Well, it appears to be so. From what Don Gardner said, I gather that the signal carried by the media weakens with time and rewriting it refreshes and strengthens the signal strength of the data carried by the media.

I guess that pretty much answers our questions. Creating a permanent or semi-permanent paging file won't harm your drive. In fact, it might even be good for your data!


Creating A Permanent Paging File In Windows 9x

Luckily, Microsoft gave us a relatively painless way to create a permanent paging file though the proper directions were not included. Fear not however. This is what guides like this are for.

First, open up System Properties, either through the Control Panel or by right-clicking on the My Computer icon and selecting Properties. Once in System Properties, click on the Performance tab and you'll see the following picture :-

Right at the bottom, you'll see a Virtual Memory... button. Click on it to get the following screen :-

By default, it is set to Let Windows manage my virtual memory settings. (Recommended). Ignore the Recommended label and select Let me specify my own virtual memory settings.

Now, you will be allowed to choose the partition you wish to place the paging file in. We will touch on this later.

Next up is the minimum and maximum values for the paging file. To create a permanent paging file, both values must be the same. You would think that Microsoft could at least post a notice about that.

Please note that Windows 95/98 will not automatically add a dynamic component to a permanent paging file. If you run out of memory with a permanent paging file, it will halt the application and generate the "Out of memory" error message.

Naturally, you will have to decide on a size for the paging file. We will be discussing this later in the guide but in this example, we will use an arbitrary value of 150MB. Once you set the two values, click on OK and then let Windows 95/98 reboot the system. A permanent paging file will be created on your hard disk.

For the curious, do not click on Disable virtual memory. (Not recommended) because that will force Windows 95/98 to use only physical RAM.

Creating A Semi-Permanent Paging File In Windows 9x

Creating a semi-permanent paging file is rather similar to creating a permanent paging file.

First, open up System Properties, either through the Control Panel or by right-clicking on the My Computer icon and selecting Properties.

Once in System Properties, click on the Performance tab and you'll see the following picture :-

Right at the bottom, you'll see a Virtual Memory... button. Click on it to get the following screen :-

By default, it is set to Let Windows manage my virtual memory settings. (Recommended). Ignore the Recommended label and select Let me specify my own virtual memory settings.

Now, you will be allowed to choose the partition you wish to place the paging file in. We will touch on this later.

To create a semi-permanent paging file, you will need to set both the minimum and maximum values. They must not be the same. If they are the same values, then the paging file becomes a permanent paging file.

The minimum value determines the size of the permanent component of the semi-permanent paging file. The maximum value determines the maximum size of the paging file (both permanent and dynamic components) and thus limits how much the dynamic component can expand.

In the example above, Windows 98 will create a permanent paging file of 150MB when it starts up. But if the paging file cannot meet the memory demands of the computer, it will dynamically expand the paging file, up to a maximum of 6692MB.

It is highly recommended that you create a large permanent component that will meet all of your usual memory needs. Use the dynamic component as a backup for emergencies.

Once you set the two values, click on OK and then let Windows 95/98 reboot the system. A permanent paging file will be created on your hard disk. Please note that the dynamic component of the paging file will only become active after the system's virtual memory requirements exceed the minimum value.

For the curious, do not click on Disable virtual memory. (Not recommended) because that will force Windows 95/98 to use only physical RAM.

Creating A Permanent Paging File In Windows 2000

In Windows 2000, it takes a little bit more digging to get where you want.

First, open up System Properties, either through the Control Panel or by right-clicking on the My Computer icon and selecting Properties.

Once in System Properties, click on the Advanced tab. There will be three options. Click on Performance Options... and you'll see the following picture :-

The second section you see is titled Virtual Memory. Under it, there's a Change... button. Click on it to get the following screen :-

By default, there won't be any values set for both the Initial size (MB) and the Maximum size (MB) options.

You can select the partition you wish to place the paging file in by clicking on the list of partitions shown on the screen. Again, the selection of partition will be discussed in detail later in this article.

To create a permanent paging file, both values for the Initial size and the Maximum size must be the same.

Please note that Windows 2000 will not automatically add a dynamic component to a permanent paging file. If you run out of memory with a permanent paging file, it will halt the application and generate the "Out of memory" error message.

Naturally, you will have to decide on a size for the paging file. We will be discussing this later in this article but for now, we will use an arbitrary value of 150MB. Once you set the two values, click on OK and then let Windows 2000 reboot the system. A permanent paging file will be created on your hard disk.

You will note that Windows 2000 does not allow a paging file size of less than 2MB.

Creating A Semi-Permanent Paging File In Windows 2000

Again, it's almost similar to creating a permanent paging file.

First, open up System Properties, either through the Control Panel or by right-clicking on the My Computer icon and selecting Properties.

Once in System Properties, click on the Advanced tab. There will be three options. Click on Performance Options... and you'll see the following picture :-

The second section you see is titled Virtual Memory. Under it, there's a Change... button. Click on it to get the following screen :-

By default, there won't be any values set for both the Initial size (MB) and the Maximum size (MB) options.

You can select the logical drive you wish to place the paging file in by clicking on the list of logical drives shown on the screen. Again, the selection of logical drives will be discussed in detail later in this article.

To create a semi-permanent paging file, you will need to set both the minimum and maximum values. They must not be the same. If they are the same values, then the paging file becomes a permanent paging file.

The minimum value determines the size of the permanent component of the semi-permanent paging file. The maximum value determines the maximum size of the paging file (both permanent and dynamic components) and thus limits how much the dynamic component can expand.

In the example above, Windows 2000 will create a permanent paging file of 150MB when it starts up. But if the paging file cannot meet the memory demands of the computer, it will dynamically expand the paging file, up to a maximum of 1422MB.

It is highly recommended that you create a large permanent component that will meet all of your usual memory needs. Use the dynamic component as a backup for emergencies.

Once you set the two values, click on OK and then let Windows 2000 reboot the system. A permanent paging file will be created on your hard disk. Please note that the dynamic component of the paging file will only become active after the system's virtual memory requirements exceed the minimum value.

You will note that Windows 2000 does not allow a paging file size of less than 2MB.

Creating A Permanent Paging File In Windows XP

Like in Windows 2000, it takes a little digging in Windows XP to get where you want.

First, open up System Properties, either through the Control Panel or by right-clicking on the My Computer icon and selecting Properties.

Once in System Properties, click on the Advanced tab. There will be three sections.

Click on Settings in the Performance section and the Performance Options screen will pop up. Click on the Advanced tab and you'll see the following picture :-

The second section you see is titled Virtual memory. Under it, there's a Change button. Click on it to get the following screen :-

You can select the logical drive you wish to place the paging file in by clicking on the list of logical drives shown on the screen. Again, the selection of logical drives will be discussed in detail later in this article.

To create a permanent paging file, both values for the Initial size and the Maximum size must be the same.

Please note that Windows XP will dynamically expand the paging file when you run out of memory, even if you create a permanent paging file. When this happens, you will get an error message telling you that Windows XP is trying to expand the paging file to create more virtual memory.

In this example, we are using an arbitrary value of 512MB. Once you set the two values, click on OK and then let Windows XP reboot the system. A permanent paging file will be created on your hard disk.

You will note that Windows XP does not allow a paging file size of less than 2MB.
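For reference, the setting this dialog changes is stored in the registry, in the PagingFiles value under the Memory Management key. The sketch below writes the 512MB/512MB example from above. It assumes the standard key location and the documented "drive:\pagefile.sys initial maximum" format, needs administrator rights, and only takes effect after a reboot, so treat it as an illustration of where the setting lives rather than a replacement for the dialog.

CODE
/* Writes the 512MB/512MB paging file setting from the example above into the
   registry value the Virtual Memory dialog edits. Assumes the standard XP key;
   requires administrator rights and a reboot to take effect. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *key_path =
        "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management";
    /* REG_MULTI_SZ: each entry is NUL-terminated, and the list ends with an extra NUL. */
    const char data[] = "C:\\pagefile.sys 512 512\0";
    HKEY key;

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, key_path, 0, KEY_SET_VALUE, &key) != ERROR_SUCCESS) {
        fprintf(stderr, "Could not open the Memory Management key\n");
        return 1;
    }
    if (RegSetValueExA(key, "PagingFiles", 0, REG_MULTI_SZ,
                       (const BYTE *)data, (DWORD)sizeof(data)) != ERROR_SUCCESS) {
        fprintf(stderr, "Could not set PagingFiles\n");
        RegCloseKey(key);
        return 1;
    }
    RegCloseKey(key);
    puts("PagingFiles set to \"C:\\pagefile.sys 512 512\" - reboot to apply.");
    return 0;
}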

Creating A Semi-Permanent Paging File In Windows XP

Creating a semi-permanent paging file is rather similar to creating a permanent paging file.

First, open up System Properties, either through the Control Panel or by right-clicking on the My Computer icon and selecting Properties.

Once in System Properties, click on the Advanced tab. There will be three sections.

Click on Settings in the Performance section and the Performance Options screen will pop up. Click on the Advanced tab and you'll see the following picture :-

The second section you see is titled Virtual memory. Under it, there's a Change button. Click on it to get the following screen :-

You can select the partition you wish to place the paging file in by clicking on the list of partitions shown on the screen. Again, the selection of partition will be discussed in detail later in this article.

To create a semi-permanent paging file, you will need to set both the minimum and maximum values. They must not be the same. If they are the same values, then the paging file becomes a permanent paging file.

The minimum value determines the size of the permanent component of the semi-permanent paging file. The maximum value determines the maximum size of the paging file (both permanent and dynamic components) and thus limits how much the dynamic component can expand.

In the example above, Windows XP will create a permanent paging file of 512MB when it starts up. But if the paging file cannot meet the memory demands of the computer, it will dynamically expand the paging file, up to a maximum of 768MB.

It is highly recommended that you create a large permanent component that will meet all of your usual memory needs. Use the dynamic component as a backup for emergencies.

Once you set the two values, click on OK and then let Windows XP reboot the system. A permanent paging file will be created on your hard disk. Please note that the dynamic component of the paging file will only become active after the system's virtual memory requirements exceed the minimum value.

You will note that Windows XP does not allow a paging file size of less than 2MB.

Making The Paging File Contiguous

After creating a permanent or semi-permanent paging file, check and make sure it is contiguous. You can ensure it is contiguous by defragmenting the hard disk before you create the permanent or semi-permanent paging file. However, that does not always work.

In such cases, you will need to defragment the paging file after it is created. Unfortunately, Windows XP's Defrag utility does not have the ability to defragment the paging file. You will have to use a third-party defragmentation utility to do this. I will use Diskeeper as an example.

Windows NT, 2000 and XP do not allow the paging file to be defragmented while it is in use. Therefore, you must set Diskeeper to defragment the paging file during the next reboot.

Run Diskeeper and click on Change your settings to expand its menu. You will see the screen below.

Look for and click on Set a boot-time defragmentation. That will display this screen.

Now, select the partition where the paging file resides and tick the Defragment the paging file checkbox. The option will be grayed out if there is no paging file in that partition.

Then click OK and reboot the computer. Diskeeper will load up during the boot process and defragment the paging file.

Once Diskeeper has completed its operation, Windows XP will boot up and start using the newly optimized paging file that is contiguous.

Please note that Diskeeper requires a certain amount of free space to defragment the paging file. If you do not have the necessary amount of free space in that partition, then Diskeeper may not defragment the paging file.

Wanna Do It For Free?

You can easily do this for free. Just download a trial copy of Diskeeper!

Moving The Paging File To The Outer Tracks

Moving the paging file to the outer tracks is a powerful way of increasing paging file performance. In fact, it will give the paging file a bigger boost in performance than just making it contiguous. Why is that?

Check out this transfer rate graph of a hard disk :-

It shows pretty clearly the transfer rate of a hard disk is highest on the outer tracks and lowest on the inner tracks. In this case, the transfer rate of the inner tracks is only about half the transfer rate of the outer tracks.

The areal density of a hard disk's platters and its spin rate are constant. But the linear velocity at each point of the platter isn't constant. Therefore, the performance of the paging file depends on where it is located on the hard disk.

The time taken for the hard disk head to sweep through a given angle is the same on the outer tracks as on the inner tracks. But because the areal density of the platter is constant, a lot more data passes under the head on the outer tracks than on the inner tracks in that same amount of time.

Now that the outer tracks have been proven to be the fastest area on a hard disk, we can use that to our advantage. By moving the paging file to the outer tracks, we give the paging file a major boost in performance.

As you can see from the example above, the transfer rate at the outer tracks is about 59MB/s, while the central and inner tracks have transfer rates of about 49MB/s and 30MB/s respectively. Moving the paging file from the inner tracks to the outer tracks will almost double its performance! Even moving the paging file from the central tracks to the outer tracks will give it a transfer rate boost of about 20%.

But please note that this method must be used in conjunction with a permanent paging file. This is because the paging file cannot be moved to the outer tracks of the hard disk unless it is a permanent paging file.

How Do We Move The Paging File To The Outer Tracks?

Before you can move the paging file to the outer tracks, you must first make the paging file permanent. Follow the steps outlined in the previous pages. Once you have a permanent paging file, you can use your favourite hard disk defragmentation utility to move the paging file to the outer tracks.

Unfortunately, Windows XP's Defrag utility does not have the ability to move the paging file to the outer tracks. You will have to use a third-party defragmentation utility to do this. I will use Diskeeper as an example.

Windows NT, 2000 and XP do not allow the paging file to be moved while it is in use. Therefore, you must set Diskeeper to move the paging file during the next reboot.

Run Diskeeper and click on Change your settings to expand its menu. You will see the screen below.

Look for and click on Set a boot-time defragmentation. That will display this screen.

Now, select the partition where the paging file resides and tick the Defragment the paging file checkbox. The option will be grayed out if there is no paging file in that partition.

Then click OK and reboot the computer. Diskeeper will load up during the boot process and defragment the paging file. It will also move the paging file to the outer tracks.

Once Diskeeper has completed its operation, Windows XP will boot up and start using the newly optimized paging file that is not only contiguous but also located in the outer tracks of the hard disk! Your paging file will now show a marked boost in performance.

Please note that you cannot actually force Diskeeper to move the paging file right up to the outermost tracks. Diskeeper has an internal algorithm that determines which files are best placed in the outermost tracks for optimal performance.

In addition, Diskeeper requires a certain amount of free space to defragment and move the paging file. If you do not have the necessary amount of free space in that partition, then Diskeeper may not defragment the paging file or move it to the outer tracks.

Creating A Huge Paging File

Because games and applications often list a minimum paging file size, many people equate the size of the paging file with performance, just like they would with anatomy. But at least in the first case, that's not true.

What does a bigger paging file get you? Well, it gives you the ability to run more memory-intensive programs concurrently. But does a larger paging file make virtual memory faster or better? Unfortunately, the answer is no.



Why Not?

First of all, creating a large amount of virtual memory doesn't mean the operating system will use it all. Although Windows will pre-emptively page out parts of idle applications, there are limits to how much it can page out for each application. Therefore, creating an excessively large paging file will just waste hard disk space.

Second, if you ever move the paging file to the outer tracks of the hard disk, an excessively large paging file will take up outer track space that could have been used to store system or application files. Look at these two pictures :-


Hard disk with a 2GB paging file (brown)


Hard disk with a 600MB paging file (brown)

The first one has a huge 2GB paging file while the second has a smaller 600MB paging file. For many systems, 600MB of virtual memory is more than enough to multitask 7 or 8 applications at the same time or run the most memory-intensive 3D games out there. So, anything more is just taking space.

The extra space taken up by an excessively large paging file on the outer tracks could have been used to store system or application files for faster access. The amount of space regained from using a smaller page file can be seen as a red block in the second picture. You can bet on a faster loading time for Windows and other applications if you limit the size of your paging file.

Therefore, the trick here is to gauge the maximum size of the paging file that you will ever need. This way, you will not create an excessively large paging file that wastes hard disk space and takes up the precious space on the outer tracks away from the system and application files.

How Large Should The Paging File Be?

That's a question that has bugged many users. Since the good old days of DOS and Windows 3.1, many users have staunchly adhered to an old rule of thumb that the swapfile should be 2.5 x the amount of RAM.

In fact, whenever I visit other forums, I still notice many people quoting this old "rule". The question is - is this rule still applicable for today's systems and operating systems? Unfortunately, it's a big NO!



Why Not 2.5 x RAM?

Back in the Windows 3.1 days, computers only came with 4MB or 8MB of RAM. 16MB of RAM was considered a luxury in those days. I remember running Windows 3.1 on an Intel i386SX-16 machine with just 4MB of RAM!

Because RAM in those days was horrendously expensive and only a limited amount of it was available in most systems, a relatively large swapfile was needed. A swapfile size of 2.5 times the system RAM wasn't a lot, considering the fact that most systems came with only 4MB or 8MB of RAM. That would only amount to a swapfile size of 10MB to 20MB, which enabled most systems to run Windows 3.1 applications comfortably.

But today, most computers come with at least 512MB of RAM and many have 1GB of RAM! If the 2.5X rule was applied, that would result in "optimal" paging file sizes of 1.28GB to 2.5GB! That doesn't make sense at all.

The purpose of buying more memory is to prevent the system from using the slower virtual memory. The more memory you buy, the less you need to use virtual memory. It doesn't make sense to increase the paging file size every time you increase the amount of RAM in your system!

Imagine if you had to follow the rule when you upgrade to 2GB of RAM in the future... You would have to create a 5GB paging file! That's ridiculous.

The amount of hard disk space you dedicate to a paging file should depend on the amount of RAM you need to use, NOT the amount of RAM you have. The 2.5 x system RAM rule was flawed from the beginning and it is certainly not applicable today.

Do not use the 2.5 x system RAM rule to determine the size of your paging file. Instead, you should first gauge how much virtual memory is actually needed by the system during the heaviest memory load. Then use your finding to set the most appropriate paging file size for your system.

But You Require A Huge Paging File For A Memory Dump!

There are people who actually believe in increasing the size of the paging file following an increase in system memory. That certainly goes against what we have been recommending, doesn't it? The reason is simple.

Whenever Windows crashes, it first writes the memory contents to the paging file. After the computer is restarted, Windows will create a memory dump file using the memory contents stored in the paging file. This memory dump file is used to analyze the cause of the crash.

However, for a complete memory dump to be created, the paging file size should be large enough to store all the contents of the system memory. That's why the paging file size has to meet this equation :-

Paging file size = Physical memory in the system + 1MB

So, if you have 1024MB of memory, the paging file size should be 1025MB in size for a complete memory dump to be created successfully.

However, this does not mean you should increase the size of your paging file according to the amount of system memory. Why not? Let's find out.



Why Not?

First of all, there is no need to create a complete memory dump. Windows supports three different kinds of memory dumps. Here is a summary of information from Microsoft's Knowledge Base.
Small memory dump (size : 64KB)

* Small memory dump files contain the least information, but consume the least disk space - just 64 kilobytes (KB).
* Unlike kernel and complete memory dump files, Windows XP stores small memory dump files in the systemroot\Minidump folder, instead of using the systemroot\Memory.dmp file name.
* Windows XP always creates a small memory dump file when a Stop error occurs, even when you choose the kernel or complete memory dump file options.
* One of the services that uses small memory dump files is the Error Reporting service. The Error Reporting service reads the contents of a small memory dump file to help diagnose problems that cause Stop errors.

Kernel memory dump (size : about 1/3 of system memory)

* This is an intermediate-size dump file that records only kernel-level memory and can occupy several megabytes (MB) of disk space.
* When a Stop error occurs, Windows XP Professional saves a kernel memory dump file to a file named systemroot\Memory.dmp and creates a small memory dump file in the systemroot\Minidump folder.
* You cannot exactly predict the size of a kernel memory dump file because it depends on the amount of kernel-mode memory allocated by the operating system and the drivers present on the machine when the Stop error occurred.

Complete memory dump (size : system memory + 1MB)

* A complete memory dump file contains the entire contents of physical memory when the Stop error occurred.
* The file size is equal to the amount of physical memory installed plus 1MB.
* When a Stop error occurs, the operating system saves a complete memory dump file to a file named systemroot\Memory.dmp and creates a small memory dump file in the systemroot\Minidump folder.
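If you prefer to see the arithmetic spelt out, here is a minimal sketch in Python (purely illustrative - the function name is made up, and the one-third figure for kernel dumps is just the rough rule of thumb from the table above) :-

def dump_pagefile_requirements(ram_mb):
    # Rough paging file space needed for each kind of memory dump,
    # based on the sizes listed in the table above.
    return {
        "small": 0.0625,          # 64KB, expressed in MB
        "kernel": ram_mb / 3.0,   # roughly one third of system memory
        "complete": ram_mb + 1,   # all of physical memory plus 1MB
    }

print(dump_pagefile_requirements(1024))
# e.g. small : 0.0625MB, kernel : ~341MB, complete : 1025MB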

You may think that it is always good to create a complete memory dump file, but that is not true. Even Microsoft recommends creating a kernel memory dump instead of a complete memory dump. Why? I'll quote them :-

For most purposes, a kernel memory dump file is the most useful kind of file for troubleshooting Stop messages. It contains more information than the small memory dump file and is significantly smaller than the complete memory dump file. It omits only those portions of memory that are unlikely to have been involved in the problem.

In addition, a kernel memory dump will require the paging file to be only about 1/3 of the system memory. It will also require the same amount of free hard disk space.

Even if you wish to create a complete memory dump, there is still no need to create a large paging file. If you restrict your paging file to, for example, 500MB, Windows XP will automatically expand the paging file to store the memory dump BEFORE it is written out to disk on the next reboot.

Therefore, I consider it to be a real waste of hard disk space if you have 2GB of memory and yet create a 2GB paging file, just so Windows XP can write an enormous memory dump the next time it crashes.

How Much Virtual Memory Do I Need?

No one can tell you how much hard disk space you need to allocate to a permanent paging file because every system is different and everyone uses his/her system differently.

If you create a permanent paging file that is too small, then Windows will continuously create more virtual memory via a dynamic extension to the permanent paging file. This reduces the paging file's performance.

If you create a permanent paging file that is too large, you are only wasting hard disk space, especially on the outer tracks.

So, the best method would be to accurately gauge how much virtual memory you actually need. This allows you to create a permanent paging file with the appropriate size. To do that, you need to monitor your paging file usage. Let's see how you can do that.



Finding Out In Windows 9x

Give your system a clean boot and once you are in Windows 95/98, load System Monitor. You can get to it via Start Menu > Programs > Accessories > System Tools. You will see this screen :-

Go to the Edit menu and click on Add Item...

In the next screen, select the Memory Manager category and add Swapfile in use. Click OK and you will see this screen :-

Now, you can monitor the size of your paging file. Start up and run all the applications that you usually use at the same time. Load several documents and work files. Play around with them and check the peak value for the paging file.

Then play several of the most memory-intensive games you have. 3D games with large textures are good ones to test. At all times, record down the highest value for the paging file size that System Monitor reports.

Once you are done, select the highest value that has been recorded for the paging file size and round it up to the nearest 100MB. For example, if the biggest size your paging file ever went during the tests was 619MB, then 700MB is the ideal size for your paging file.

But always make sure you add at least 40-50MB as a cushion against future memory-guzzling applications or games. For example, if the largest size your paging file expanded to during your tests was 684MB, then 750MB would be an ideal size for your paging file.


Finding Out In Windows XP

Finding your optimal paging file size in Windows XP is much easier.

Just give your system a clean boot. Once you are in Windows XP, run Task Manager . You can get to it by right-clicking on the taskbar and selecting Task Manager. You can also access it through the keyboard shortcut of Ctrl-Alt-Del.

After you load Task Manager, click on the Performance tab. You will see this screen :-

Now, you can monitor the size of your paging file. Start up and run all the applications that you usually use at the same time. Load several documents and work files. Play around with them and check the peak value for the paging file.

Then play several of the most memory-intensive games you have. 3D games with large textures are good ones to test. At all times, record down the highest value for the paging file size that Task Manager reports.
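If you would rather log these numbers than keep an eye on Task Manager yourself, a small script can poll the commit charge (the figure behind the PF Usage graph) for you. Here is a minimal sketch in Python using the Win32 GlobalMemoryStatusEx() call - the script is just an illustration and assumes a Windows machine with Python installed :-

import ctypes
import time

class MEMORYSTATUSEX(ctypes.Structure):
    # Layout expected by the Win32 GlobalMemoryStatusEx() call.
    _fields_ = [
        ("dwLength", ctypes.c_uint32),
        ("dwMemoryLoad", ctypes.c_uint32),
        ("ullTotalPhys", ctypes.c_uint64),
        ("ullAvailPhys", ctypes.c_uint64),
        ("ullTotalPageFile", ctypes.c_uint64),   # commit limit (RAM + paging files)
        ("ullAvailPageFile", ctypes.c_uint64),   # commit still available
        ("ullTotalVirtual", ctypes.c_uint64),
        ("ullAvailVirtual", ctypes.c_uint64),
        ("ullAvailExtendedVirtual", ctypes.c_uint64),
    ]

peak_mb = 0
while True:    # press Ctrl-C to stop once you have finished your tests
    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
    used_mb = (status.ullTotalPageFile - status.ullAvailPageFile) // (1024 * 1024)
    peak_mb = max(peak_mb, used_mb)
    print("commit charge in use: %d MB (peak so far: %d MB)" % (used_mb, peak_mb))
    time.sleep(5)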

Once you are done, select the highest value that has been recorded for the paging file size and round it up to the nearest 100MB. For example, if the biggest size your paging file ever went during the tests was 619MB, then 700MB is the ideal size for your paging file.

But always make sure you add at least 40-50MB as a cushion against future memory-guzzling applications or games. For example, if the largest size your paging file expanded to during your tests was 684MB, then 750MB would be an ideal size for your paging file.
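The rounding rule above is easy enough to do in your head, but if you want it as a formula, here is one way to combine the cushion and the round-up that reproduces both worked examples (a minimal Python sketch; the 50MB cushion and 50MB rounding step are simply the values suggested above) :-

def suggested_pagefile_mb(peak_mb, cushion_mb=50, round_to_mb=50):
    # Add a cushion for future memory-guzzling applications or games,
    # then round up to the next multiple of round_to_mb.
    padded = peak_mb + cushion_mb
    return ((padded + round_to_mb - 1) // round_to_mb) * round_to_mb

print(suggested_pagefile_mb(619))   # 700MB, as in the first example
print(suggested_pagefile_mb(684))   # 750MB, as in the second example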

Moving The Paging File To A Different Partition

Another popular technique proposed by many tweakers suggests moving a temporary paging file from the default first partition to a separate, dedicated partition.

The reasons for this technique are ostensibly two-fold :-

+ to reduce fragmentation of the first partition
+ to ensure that the paging file will remain contiguous even though it is a temporary paging file

This idea looks good because it enables users of temporary paging files to keep their primary partition neat and the paging file contiguous for a speed boost.

However, many users of this technique fail to take several things into account. Let's see what they are.



Cylinders And Partitions

First of all, let's take a look at a hard disk cylinder. A cylinder consists of the same tracks on all the platters in the hard disk.

The first cylinder, cylinder 0, is the outermost cylinder and consists of the first track of all the platters in the hard disk. Such groups of tracks have a cylindrical look, hence the name. Cylinder n is the last (innermost) cylinder of the hard disk, where n can be any integer.

Partitions are constructed using full cylinders. The first one starts at cylinder 0 and goes out to wherever you specify. The next one starts on the following cylinder, and so on. If you try to create a partition whose end falls in the middle of a cylinder, FDISK or similar utilities will round it up so that the partition occupies the entire cylinder, instead of a partial cylinder.

Needless to say, the first partition will always start with the first track of every platter. In other words, the first partition will always be the fastest partition in the hard disk, followed by the second partition and so on. Therefore, if you create a second partition and dump the paging file there, you will actually be moving it to a slower part of your hard disk!

As you can see, while the temporary paging file will remain contiguous using this technique, the transfer of the paging file from the outer tracks to the inner tracks of the hard disk will inevitably reduce its performance.

Need More Reasons?

Creating a dedicated partition for the dynamic paging file also means tying up hard disk space and inviting inflexibility.

Users of FDISK will find it impossible to change the size of the paging file partition when they need to do so. In fact, they will have to remove at least two partitions to create a larger one. If they only have one primary and a secondary paging file partition, then they will have to remove both and recreate two new partitions.

Users of special utilities like Partition Magic will have an easier time as they can easily adjust the sizes of the partitions. But in the end, this method is counter-productive because for all your trouble, you have just slowed down your paging file and walled off a portion of your hard disk for the dedicated partition.

The main reason for using a temporary paging file is actually to save hard disk space. Users of a temporary paging file avoid tying up large amounts of hard disk space in a permanent swapfile.

However, this method actually requires you to set aside a large amount of hard disk space and, worse, cordon off this space in an inflexible partition. If you can afford to allocate space for this dedicated partition, you would be better off using the space for a permanent paging file.

In my opinion, this technique is a waste of time and needlessly endangers your data. Messing around with FDISK and partitions can be heartbreakingly exciting, if you catch my drift.



More Partitions = Data Parachute?

Some users advocate using multiple partitions for safety reasons. Their opinion is that in the event of a hard disk crash, corruption to the boot sector or FAT (File Allocation Tables), only the primary partition will be lost, leaving precious data safe in the other partitions.

Unfortunately, from my experience with hard disk crashes, every partition was inevitably wiped out. When a hard disk head crashes into a platter, I seriously doubt it would politely avoid scoring through the media that has been allocated to other partitions.

Russ Johnson, a Product Support Engineer from Symantec Corporation has this to say, "It's not a substitute for a good backup, but it may save you from having to restore all of your data from a backup. However, if your first partition is taken out, more than likely the whole drive will be lost. The first partition is also the location of the Master Boot Record and the partition table."

Now, I agree that storing your data on a different partition is actually a good practice. It can save your data if the first partition gets corrupted due to a soft error. For example, even if the FAT of one of the partitions gets corrupted, data on the other partitions will still be safe.

So, if data integrity (as well as disk management) is important to you, you should consider using multiple partitions. However, this does not mean you should move the paging file to a different partition... oh no...

When the paging file is permanent, tweakers who advocate moving paging files around will tell you to move your paging file to a second hard disk. Why?

As the theory goes, this allows your system to access both the paging file on the second hard disk and data on the first hard disk concurrently. This theoretically improves performance a lot! But does it really work?

Well, it depends.



Hard Disk, NOT Partition!

Many people get confused by drive letters. They assume that moving the paging file from drive C: to drive D: is the same as moving it to another hard disk. However, this is not true.

The operating system does not bother with physical drives. It is only interested in logical drives. By this, we mean properly-formatted partitions that can be accessed by the operating system.

To the operating system, partitions appear as separate drives even though they may reside on the same hard disk. If you partition your hard disk into three different partitions, your operating system will identify them as three logical drives (Drive C:, Drive D: and Drive E:). But they are still physically on the same hard disk!

Therefore, if you merely move the paging file to a different logical drive, you could be doing nothing more than moving it to a different partition on the same hard disk. So, please check and make sure you are moving it to a physically-separate hard disk. Preferably, it should be the first partition on that hard disk.

Parallel-ATA

Many tweakers forget one thing when they move their paging files to the second hard disk - only one PATA (Parallel-ATA) device can be active at any one time on the same IDE channel.

Most users slave the second hard disk to the first hard disk on the primary IDE channel and put the removable media drives (CD writers, DVD-ROM, etc.) on the secondary IDE channel. That is theoretically sound practice but it actually negates the purpose of moving the paging file off the primary hard disk!

Because both hard disks are on the same IDE channel, they can't be active at the same time. So, there is no way data can be read from both hard disks at the same time. In fact, because the secondary hard disk is often slower and smaller than the primary hard disk, the performance of the paging file on the second hard disk will actually be worse off.



So How Do We Make It Work?

The only way for this method to work is to put the first and second hard disks on separate IDE channels. That means the first hard disk gets hooked up to the primary IDE channel and the second hard disk gets the secondary IDE channel. This allows both IDE channels to be active at the same time, delivering data from both hard disks concurrently.

In addition, the second hard disk needs to be at least half as fast as the primary hard disk. This allows the paging file on the second hard disk to be at least as fast as a paging file on the first hard disk. Otherwise, the performance advantage accessing the paging file concurrently on a second hard disk will be negated by the slower performance of the second hard disk.

Remember, if the first hard disk can serve data to and from both the application in use and the paging file faster than the second hard disk can access the paging file alone, then it is pointless to maintain a paging file on the second hard disk.

But if the second hard disk is more than half as fast as the first hard disk, then it would be advantageous to move the paging file there because the paging file can then be accessed concurrently with data on the first hard disk. In addition, the valuable outer tracks on the first hard disk will be freed up for use by the operating system.
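To put some numbers on that rule of thumb, here is a toy model (a Python sketch with made-up transfer rates and a made-up 50/50 split between application data and paging traffic - real workloads will differ) :-

def time_on_one_disk(app_mb, page_mb, disk1_mbps):
    # Disk 1 has to serve the application data AND the paging traffic by itself.
    return (app_mb + page_mb) / disk1_mbps

def time_on_two_disks(app_mb, page_mb, disk1_mbps, disk2_mbps):
    # Disk 1 serves the application data while disk 2 serves the paging
    # traffic at the same time; the slower of the two sets the total time.
    return max(app_mb / disk1_mbps, page_mb / disk2_mbps)

app_mb, page_mb, disk1 = 100, 100, 60                  # 60MB/s first hard disk
print(time_on_one_disk(app_mb, page_mb, disk1))        # ~3.3s with everything on one disk
print(time_on_two_disks(app_mb, page_mb, disk1, 35))   # ~2.9s - faster, disk 2 is more than half as fast
print(time_on_two_disks(app_mb, page_mb, disk1, 25))   # 4.0s - slower, disk 2 is less than half as fast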

Other Considerations

The trouble with such a setup is that most motherboards come with only two IDE channels.

If you slave your DVD writer to the first hard disk (on the primary IDE channel), then you may have trouble writing data from the first hard disk to a DVD. This is because the IDE channel has to interleave its operations between the first hard disk and the DVD writer.

You won't have any trouble writing data from devices on the second IDE channel to the DVD writer though. This is because the DVD writer is on the first IDE channel and can thus be accessed concurrently with the devices on the second IDE channel.

However, if you slave your DVD writer to the second hard disk (on the secondary IDE channel), then you may have problems with games running off CDs or DVDs in that drive. Of course, this time you won't have any trouble writing data from devices on the first IDE channel to the DVD writer!

Either way, you will face performance compromises. It is a great idea but implementation is not quite as simple as you might think. The key to making this work is to be aware of such considerations and plan your setup accordingly.

But if your motherboard comes with enough IDE channels to give each device its own channel, then the way is clear - hook the second hard disk to a separate IDE channel and move the paging file there!

Serial-ATA

The beauty of Serial-ATA is that each device is given its own channel. This completely circumvents the issues that Parallel-ATA drives have with sharing the same IDE channel.

So, if you are only using Serial-ATA devices, you can immediately move the paging file to the second hard disk for a big speed boost. Just make sure you move it to a partition that is on the second hard disk, not the second partition in your first hard disk.

How Do I Move The Paging File In Windows 9x?

If you want to move your paging file to a different partition or drive, first open up System Properties, either through the Control Panel or by right-clicking on My Computer and selecting Properties.

Once in System Properties, click on the Performance tab and you will see the following picture :-

Right at the bottom, you'll see a Virtual Memory... button. Click on it to get the following screen :-

Select Let me specify my own virtual memory settings. This allows you to choose the logical drive in which you would like to place the paging file.

Click on the pull-down list. It will show you all the available partitions and hard disks in your system. Select the logical drive where you want the paging file to be.

Then set the minimum and maximum paging file sizes and click OK. After rebooting, your paging file will be established in the logical drive you selected.

Again, please remember that each logical drive represents a partition, not a physical drive. So, if you want to move your paging file to a separate hard disk, select a logical drive that resides on that hard disk. Preferably, it should be the first partition in the other hard disk (which should be on its own IDE channel).

How Do I Move The Paging File In Windows 2000?

First, open up System Properties, either through the Control Panel or by right-clicking on My Computer and selecting Properties.

Once in System Properties, click on the Advanced tab. There will be three options. Click on Performance Options... and you will see the following picture :-

The second section you see is titled Virtual Memory. Under it, there is a Change... button. Click on it to get the following screen :-

This is where you manage Windows 2000's paging file settings.

Just scroll through the selection of logical drives available. Click on the logical drive that you want to place the paging file. Then set the initial and maximum paging file sizes and click Set.

To remove the paging file from the default location in the first logical drive, select drive C: and set both initial and maximum sizes to 0 (zero). Then click Set.

After you are done, just click OK and allow Windows 2000 to reboot your computer. After rebooting, your paging file will be established in the logical drive you selected.

Again, please remember that each logical drive represents a partition, not a physical drive. So, if you want to move your paging file to a separate hard disk, select a logical drive that resides on that hard disk. Preferably, it should be the first partition in the other hard disk (which should be on its own IDE channel).

How Do I Move The Paging File In Windows XP?

First, open up System Properties, either through the Control Panel or by right-clicking on the My Computer icon and selecting Properties.

Once in System Properties, click on the Advanced tab. There will be three sections.

Click on Settings in the Performance section and the Performance Options screen will pop up. Click on the Advanced tab and you'll see the following picture :-

The second section you see is titled Virtual memory. Under it, there's a Change button. Click on it to get the following screen :-

This is where you manage Windows XP's paging file settings.

You can select the logical drive you wish to place the paging file in by clicking on the list of logical drives shown on the screen.

Just scroll through the selection of logical drives available. Click on the logical drive that you want to place the paging file. Then set the initial and maximum paging file sizes and click Set.

To remove the paging file from the default location in the first logical drive, select drive C: and select No paging file. Then click Set.

After you are done, just click OK and allow Windows XP to reboot your computer. After rebooting, your paging file will be established in the logical drive you selected.

Again, please remember that each logical drive represents a partition, not a physical drive. So, if you want to move your paging file to a separate hard disk, select a logical drive that resides on that hard disk. Preferably, it should be the first partition in the other hard disk (which should be on its own IDE channel).
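Incidentally, the settings you change on this screen end up in the PagingFiles value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management. If you are curious, here is a minimal, read-only sketch in Python (assuming Python is installed on the machine) that prints the current entries - it changes nothing :-

import winreg

# Open the key where Windows records the paging file settings (read-only).
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management")
paging_files, value_type = winreg.QueryValueEx(key, "PagingFiles")
winreg.CloseKey(key)

# Each entry typically looks something like "C:\pagefile.sys 512 1024" -
# the drive, the initial size in MB and the maximum size in MB.
for entry in paging_files:
    print(entry)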

Multiple Hard Disks

With hard disk prices dropping and multiple hard disks becoming common, two interesting possibilities arise :-

+ multiple paging files
+ moving the paging file to a RAID array

Both methods appear to offer better paging file performance. But do they really offer better performance? Let's find out...



Multiple Paging Files

With multiple hard disks in the same system, you can actually split the paging file into multiple paging files!

Instead of just moving the paging file from one hard disk to another, you can actually place a paging file on each and every hard disk in the system. And if each hard disk has its own IDE channel, having multiple paging files will greatly increase paging performance.

Because each hard disk with its own channel can be accessed concurrently with the other hard disks in the same system, multiple paging files allow the computer to access all of them simultaneously. Needless to say, this greatly increases their combined read and write performance.

However, it is still recommended that you do not place the paging file in the primary hard disk. Leave the outer tracks there for the operating system to use. This will also free up the first hard disk for the operating system's use, instead of sharing it with the paging file.

So, if you have four hard disks in your system, you should create only three paging files. One in each of the other hard disks, leaving the primary boot hard disk without a paging file.

Creating Multiple Paging Files In Windows XP

First, open up System Properties, either through the Control Panel or by right-clicking on the My Computer icon and selecting Properties.

Once in System Properties, click on the Advanced tab. There will be three sections.

Click on Settings in the Performance section and the Performance Options screen will pop up. Click on the Advanced tab and you'll see the following picture :-

The second section you see is titled Virtual memory. Under it, there's a Change button. Click on it to get the following screen :-

This is where you manage Windows XP's paging file settings.

You can select the logical drive in which you wish to create a paging file by clicking on the list of logical drives shown on the screen.

Just scroll through the selection of logical drives available. Click on the logical drive in which you want to create a paging file. Then set the initial and maximum paging file sizes and click Set. Do this for as many hard disks as you want in your system.

To remove the paging file from the default location in the first logical drive, select drive C: and select No paging file. Then click Set.

After you are done, just click OK and allow Windows XP to reboot your computer. After rebooting, multiple paging files will be created in the logical drives you selected.

Please remember that each logical drive represents a partition, not a physical drive. So, if you want to create a paging file in a separate hard disk, select a logical drive that resides on that hard disk. Preferably, it should be the first partition in the other hard disk (which should be on its own IDE channel).

Moving The Paging File To A RAID Array

Before proceeding, you should read our RAID Optimization Guide for a primer on the different RAID levels.



RAID 0

RAID 0 uses striping to achieve better performance. Putting the paging file on a RAID 0 array will greatly improve both its read and write performance because the paging file will be split up between the hard disks in the RAID 0 array.

Although RAID 0 does not offer any data redundancy, that is perfectly alright for the paging file since it is only used for the temporary storage of the system's memory contents.



RAID 1

A paging file on a RAID 1 array may benefit from a faster access time. But because the paging file has to be mirrored on a second hard disk, this greatly degrades the paging file's write performance. In addition, the paging file will not benefit from the additional data redundancy offered by RAID 1.

Therefore, it is recommended that you do not put your paging file in a RAID 1 array. You should place the paging file on a separate hard disk.



RAID 0+1

Although the paging file will benefit from the increased read performance in a RAID 0+1 array, its write performance will be as severely degraded as it would be in a RAID 1 array. In addition, the paging file will not benefit from the additional data redundancy offered by RAID 0+1.

Therefore, it is recommended that you do not put your paging file in a RAID 0+1 array. You should place the paging file on a separate hard disk.


RAID 5

RAID 5's distributed parity requires a lot of calculations. Moving the paging file to a RAID 5 array will greatly increase the amount of calculations the RAID controller has to do. This greatly reduces its performance. Again, the paging file will not benefit from the additional data redundancy offered by RAID 5.

Therefore, it is recommended that you do not put your paging file in a RAID 5 array. You should place the paging file on a separate hard disk.



JBOD

When hard disks are combined in a JBOD array, they act exactly like a single large logical drive. Therefore, placing the paging file here will not bring any benefit at all. It will just be like putting it on the first hard disk.



Conclusion

Generally, placing the paging file in a RAID array is not recommended at all. The only situation where a RAID array can actually improve the paging file's performance is if the RAID array was created using RAID 0. Otherwise, avoid placing the paging file in a RAID array.



How To Place The Paging File In The RAID Array

To the operating system, the RAID array appears as an ordinary logical drive. Therefore, moving the paging file to the RAID array is as simple as moving it to another logical drive. Just follow the instructions for moving the paging file to a different logical drive.

Moving The Paging File To A RAM Drive

A RAM drive is nothing more than a logical drive created out of system memory.

To the operating system, a RAM drive appears as a normal logical drive, albeit a very fast one! Because it is created using system memory, a RAM drive is fast. VERY fast, in fact. As I mentioned earlier, even dual-channel PC2700 DDR memory is over 70X faster than the fastest hard disk.

That's why many people actually advocate moving the paging file to a RAM drive. Their reason is simple. Because a RAM drive is so fast, moving the paging file there will greatly improve its performance.

They are most certainly correct. Moving the paging file into a RAM drive will definitely give it an enormous boost in performance. However, that is really counter-productive. Let's see why.



From RAM To Hard Disk To RAM?

The purpose of a paging file is to create virtual memory for situations where there is not enough system memory. Virtual memory serves as an emergency source of additional memory, only to be used when there is not enough system memory.

The RAM drive, on the other hand, is used to create a very fast pool of temporary storage space in the system memory. It ties up system memory, so it is usually created when there is a lot of free system memory. Even then, it is usually kept small and only used to store temporary work files.

Therefore, does it make sense to use limited system memory to create a RAM drive that is used to service the paging file? No, it doesn't make sense at all.

Remember, most paging files are very large - a few hundred megabytes to a gigabyte in size. Most computers do not even come with enough memory to create such a large RAM drive, much less move the entire paging file into one.

Even if you have a lot of system memory, creating a large RAM drive reduces the pool of available system memory. This increases the need for virtual memory which means more data will have to be paged out into the paging file. This increases the size of the paging file which will inevitably be much larger than the RAM drive.
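To put some made-up numbers on that (a minimal Python sketch; the figures are purely illustrative and ignore what the operating system itself uses) :-

ram_mb = 1024            # total system memory
ram_drive_mb = 512       # RAM drive carved out of that memory
workload_mb = 768        # what your applications actually need

left_for_apps = ram_mb - ram_drive_mb
paged_out_mb = max(0, workload_mb - left_for_apps)
print("memory left for applications : %d MB" % left_for_apps)       # 512MB
print("data pushed out to the paging file : %d MB" % paged_out_mb)  # 256MB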

At this point, Windows will automatically create more virtual memory via a dynamic paging file. Because a large portion of the system memory has already been taken up by the RAM drive, this will cause a lot of data to be paged out to a large dynamic paging file on the hard disk. That defeats the purpose of moving the paging file to the RAM drive - improved performance.

If your system has a lot of system memory, don't waste your time creating a RAM drive to service the paging file. If there is a lot of free system memory, Windows will not need to page out data to the paging file. That would produce the best results. Nothing is faster than running the programs directly in system memory.

Reducing Reliance On Virtual Memory

Windows can get too enthusiastic about paging data out to the paging file. This can lead to unnecessary paging, even when there is a lot of free system memory.

Luckily, we can do something about this :-

+ Enabling the Pagefile_Call_Async_Manager service (Windows 98, Me)
+ Stopping NTExecutive from paging (Windows NT, 2000, XP and above)

We will take a look at each and see how they work.



Enabling The Pagefile_Call_Async_Manager Service

Microsoft added a Pagefile_Call_Async_Manager feature in Windows 98. Enabling it (via the ConservativeSwapfileUsage setting shown below) forces Windows 98 to manage the paging file more like Windows 95 did. According to Microsoft, this decreases performance. However, it is actually quite the opposite.

Enabling this feature actually forces Windows 98 to be more conservative about using the paging file. Windows 98 will reduce the amount of paging and keep more data in system memory. This improves performance by keeping more data in system memory than in the paging file.

Needless to say, it is recommended that you enable the Pagefile_Call_Async_Manager service. Just make sure your system has a good amount of system memory.

To enable the Pagefile_Call_Async_Manager service, you will have to edit the System.ini file (usually found in the drive:\Windows\ folder).

Look for the [386Enh] section and add the following entry to it :-

[386Enh]
ConservativeSwapfileUsage=1

Save the change you made to the System.ini file and reboot the computer. When Windows 98 boots up again, it will be using a more conservative approach to paging.

Incidentally, this method is said to work with Windows Me although I cannot confirm this. But please note that this method does not work in Windows NT, 2000 and XP.


Stop NTExecutive From Paging

In Windows NT, 2000 and XP, you can prevent pageable drivers and system code in the Windows NT Executive from being paged out to the paging file.

Normally, pageable drivers and system code are paged out to the paging file to free up memory. Naturally, this reduces the performance of the operating system and the affected drivers.

However, you can easily change that and force Windows to keep all drivers and system code in the system memory. But you will need to edit the registry.

Start up Registry Editor by running regedit.exe in the drive:\Windows\ folder or by going to Start Menu -> Run... -> regedit.exe.

Once you have opened up Registry Editor, go to the following subkey :-

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management

You will see the following screen :-

Look for the DisablePagingExecutive option. By default, it is set to 0.

Double-click on it and change its value to 1. Then close Registry Editor and reboot your computer.

When Windows boots up again, the pageable drivers and system code will no longer be paged out to the paging file. Instead, they will be retained in system memory for maximum performance.

This method works with Windows NT, 2000 and XP. It does not work with Windows 98 or Windows Me.
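If you prefer to script the change instead of clicking through Registry Editor, here is a minimal sketch in Python (assuming Python is installed; it writes to HKEY_LOCAL_MACHINE, so it must be run with administrative rights, and a reboot is still needed before it takes effect - try it at your own risk) :-

import winreg

# Set DisablePagingExecutive to 1, exactly as described above.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management",
    0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "DisablePagingExecutive", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)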

Conclusion

Optimizing the paging file isn't a very hard thing to do. The main problem is evaluating and selecting the best methods of optimization for your system.

The previous pages have discussed, at some length, the pros and cons of the different methods. By now, you should be able to see a pattern.

Evidently, creating a semi-permanent, contiguous paging file that is slightly larger than what you normally need and moving it to the outer tracks of the hard disk are generally the best ways to optimize the paging file. If you have multiple hard disks, creating multiple paging files will also greatly improve its performance. Needless to say, we should also force Windows to reduce its reliance on virtual memory.

But we should generally avoid placing the paging file in RAID arrays or a RAM drive. It also does not make sense to create a really massive paging file. Needless to say, it is counter-productive to simply move the paging file to another partition of the same hard disk.

I hope this guide has been of great help to you in optimizing the paging file. Let us know if you have any comments or perhaps new tips on further optimizing the virtual memory system!

Please feel free to take a look at our other guides and reviews or drop by our forums for a chat.
If you liked the tutorial, please comment.


Try this at your own risk, although it has been tested by the owner of the guide.
It is recommended that you back up your system before messing with it, so that if anything goes wrong, you can revert back to normal.
