Thursday, March 15, 2012

CentOS Network Installation Server


REMOTE INSTALLATION
from CentOS 6


Required Packages for Network Installation:
  • xinetd: xinetd-managed applications (like tftp, telnet, etc.) store their configuration files in the /etc/xinetd.d/ directory.
  • tftp-server
  • dhcp
  • syslinux: provides the PXE boot loader (pxelinux.0)
  • nfs: exports the extracted installation image to the client

Installation of Required Packages:
  • xinetd: yum install xinetd
  • tftp-server: yum install tftp-server
  • dhcp: yum install dhcp
  • syslinux: yum install syslinux
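
The list above omits the NFS server; on CentOS 6 it is normally provided by the nfs-utils package:
  • nfs: yum install nfs-utils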

Steps to set up tftpboot:
  1. Create the tftpboot directory under the root directory (/):
     # mkdir /tftpboot/
  2. Edit the tftp configuration file (vi /etc/xinetd.d/tftp) and point it at this directory:

server_args = -s /tftpboot
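
For reference, the whole stanza in /etc/xinetd.d/tftp usually looks something like the sketch below; the exact defaults can differ between tftp-server versions, but the important parts are server_args pointing at /tftpboot and disable set to no so that xinetd actually starts the service:

service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
        disable         = no
}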

    After installing the syslinux package, the pxelinux.0 file is available under the /usr/share/syslinux/ directory. This PXE boot loader is what loads the kernel and initrd images on the client machine. Copy it to /tftpboot as shown below.

# cp /usr/share/syslinux/pxelinux.0 /tftpboot


    Mount the ISO image and copy its contents into a directory under /tftpboot:
    # mkdir /centos
    # mount -o loop centos-DVD-x86_64.iso /centos
    # cp -vr /centos /tftpboot/
    Copy vmlinuz and initrd.img into /tftpboot:
    # cp /centos/isolinux/vmlinuz /centos/isolinux/initrd.img /tftpboot

    Create the directory pxelinux.cfg under /tftpboot and define the PXE boot configuration for the client.
# mkdir /tftpboot/pxelinux.cfg
# vi /tftpboot/pxelinux.cfg/default
 

default centos
label centos
kernel vmlinuz
append initrd=initrd.img showopts install=nfs://192.168.56.1/tftpboot/centos/

Save this file using :x or :wq.
default: specifies the default entry to boot on the remote machine; it must be the same as a label defined below.
label: the name of the OS entry that you are going to install.
kernel: the name of the kernel image that you copied into the /tftpboot directory.
showopts: displays the boot options passed to the installer.
install: the boot argument that tells the installer where to fetch the shared CentOS image (the NFS path).
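
Optionally, pxelinux can display a boot prompt and wait before starting the default label. The lines below are an illustrative sketch, not part of the original setup; note also that with the stock CentOS 6 installer the NFS source is commonly passed as a method= (or repo=) boot argument instead of install=:

prompt 1
timeout 100
# timeout is in units of 1/10 second, so 100 = 10 seconds
# alternative append line often used with the CentOS 6 installer:
# append initrd=initrd.img method=nfs:192.168.56.1:/tftpboot/centos/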



    DHCP server setup:
    Add the following lines to /etc/dhcp/dhcpd.conf:
    allow booting;
    allow bootp;
    filename "pxelinux.0";
    next-server 192.168.56.1;    # the static IP address of this server
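
    These options normally sit inside a subnet declaration for the PXE network. A minimal sketch, assuming the network is 192.168.56.0/24 and the address range below is free to hand out to clients:

subnet 192.168.56.0 netmask 255.255.255.0 {
        range 192.168.56.100 192.168.56.200;   # example pool for PXE clients
        option routers 192.168.56.1;
        allow booting;
        allow bootp;
        next-server 192.168.56.1;
        filename "pxelinux.0";
}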

    NFS server setup:
    # vi /etc/exports
/tftpboot/centos *(rw)    # path of the extracted ISO image; must match the NFS path in pxelinux.cfg/default
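
Optionally, the new export can be activated without restarting the whole NFS service:
# exportfs -ra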

    Restart the xinetd, nfs, and dhcpd services:
# service xinetd restart
# service nfs restart
# service dhcpd restart
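
To make the setup survive a reboot, you can also enable the same services at boot time (standard CentOS 6 service names assumed):
# chkconfig xinetd on
# chkconfig nfs on
# chkconfig dhcpd on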


    SETTING ON THE CLIENT SIDE:
    Open the BIOS setup on the client machine, set the boot device priority to NETWORK (PXE boot),
    save the settings and reboot the system.
    In this way you can install a Linux OS on a remote machine.
    Author
    Pardeep Taya
    email: bk_pardeep@yahoo.co.in

Wednesday, February 1, 2012

One more day when I feel ashamed to be an engineer.

There is no use boasting about our engineering and our engineering institutes when we sign big deals (billions of dollars) for defence equipment. Read this: http://timesofindia.indiatimes.com/india/French-jet-Rafale-bags-20bn-IAF-fighter-order-India-briefs-losing-European-countries/articleshow/11706551.cms.

I think we are the world's leading engineers only on paper, like our Indian cricket team. There is talent, hard work, and funding too, so why don't we work on R&D? Why have we become manpower instead of brainpower?

Saturday, January 21, 2012

Hyper-Threading and Multi-Core



Threads

Consider the problem of cooking for a big dinner party. Each dish has its own recipe. You could follow the instructions in one recipe until that one dish is done, then set it aside and start the next dish. Unfortunately, it would take several days to cook the dinner, and everything would come out cold. Fortunately, there are long periods of time when something sits in the oven, and while it is cooking you can prepare one or two other things.
A sequence of instructions to do one thing is called a “recipe” in the kitchen and a “thread” in computer programming. A computer user intuitively understands the behavior of threads when running several programs on the screen, or when listening to an MP3 file in the background while typing a letter into the word processor. Even a single program can make use of threads. A browser has a separate thread for every file or image you are downloading, and it may assign a separate thread to decode each image or banner ad that appears on the screen when you visit the New York Times web site.
Some short operations have a very high priority. For example, a pot of rice you just started has to be checked every 30 seconds or so to see if it has come to a full boil. At that point the heat can be turned down, the pot can be covered, and you can forget about it for 15 minutes. However, if you don’t check it regularly at first, it will boil over, make a mess on the stove, and you will have to start over.
Computer programs also assign a priority to their threads. As with cooking, high priority can only be assigned to trivial tasks that can be accomplished in almost no time at all. Just as a kitchen has to have timers, and a beep when the microwave is done, so the operating system has to have support for program threads and the ability to connect them to timers and to events signaled when data arrives from the network or another device.
In the kitchen, each task you perform has its own set of tools. To chop carrots, you need a knife and a cutting board. To take something from the oven, you need oven mittens. It takes some small amount of time to set down what you are doing and change. If you don’t change, you will find it is very difficult to cut carrots while wearing oven mittens.
Each thread in the computer stores its status and data in the CPU chip. To switch threads, the operating system has to take this data out of the CPU, store it away, and load up data for the other thread. Switching from one thread to another takes a few hundred instructions, but this is not a problem when the CPU can execute billions of instructions a second while a hard drive or network performs only about 30 operations per second. The overhead of thread switching for I/O is trivial.
If it is a big complicated dinner that one person can simply not get done in time, you need some help. Specific tasks can be assigned to different people. The threads don’t change. The bread is still cooked the same way whether there is one person in the kitchen or two. With two people, however, one can chop carrots while the other peels potatoes.
Modern operating systems support computers with more than one CPU chip. The system assigns one thread to run on one CPU, and another thread to run on the next CPU. The two threads run concurrently. However, such systems are expensive and are typically found only in big servers or engineering workstations. Desktop and laptop computers have traditionally come with only one CPU.
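
On a Linux machine you can see these threads directly. A quick look, assuming the standard procps tools; the LWP/NLWP columns in ps are the individual threads of each process:

# ps -eLf | head
# top -H        # per-thread view; pressing H inside top toggles it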

Hyper-Threading

As has already been noted, memory delay has become an important problem for computer performance. When an instruction requires data that is in second level cache, it may have to wait a cycle or two. During this time, the CPU will look for other instructions that do not depend on the result of the blocked instruction and execute them out of order. However, out of order execution is at best good for a dozen instructions. When an instruction needs data from DDR DRAM, it will be blocked for a length of time during which the CPU could have run hundreds of instructions.
In 2002, Intel tried to address this memory delay problem with a trick called Hyper-Threading. Rather than duplicate the entire circuitry of a CPU, a Hyper-Threading processor simply duplicates the registers that hold all the data that the OS would have to remove from the CPU in order to run a different thread. The OS thinks that there are two CPUs and it assigns two different threads to them. All the registers and data needed to run each thread are loaded into the same CPU chip at the same time.
When both threads are able to run at full speed, the CPU spends half its time running instructions for each thread. Unlike the OS, the CPU doesn't have a view of "priority" and cannot favor one thread because it is more important. However, if one thread becomes blocked because it is waiting for data from the very slow main memory, then the CPU can apply all of its resources to executing instructions for the other thread. Only when both threads are simultaneously blocked waiting for data from memory does the CPU become idle.
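
On Linux you can see how a Hyper-Threaded (or multi-core) processor appears to the OS. A quick check, assuming an x86 system and the usual /proc/cpuinfo field names:

# grep -c ^processor /proc/cpuinfo            # logical CPUs the scheduler can run threads on
# grep -E 'siblings|cpu cores' /proc/cpuinfo | sort -u

If "siblings" is larger than "cpu cores", Hyper-Threading is enabled; if the two are equal, every logical CPU is a real core.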

Multi-Core

Moore's Law says that every 18 months the number of circuits on a chip can double. About one Moore Generation after Intel introduced Hyperthreading both Intel and AMD decided to spend the extra transistors to take the next step and create two real CPUs in the same chip.
It has always been possible to do this in any 18 month cycle. However, vendors previously decided to use the transistors to make the single CPU run faster, by supporting out of order execution and register renaming.
A server tends to assign a thread to each incoming user request. Generally all network users are of equal priority, so threading is an obvious choice for server software. However, desktop users tend to do one primary thing at a time. If you are running a low-intensity job like word processing or web browsing, CPU speed doesn't matter much. However, playing video games, retouching photographs, compressing TV programs, and a few other consumer programs will use a lot of one CPU, so making the one CPU run faster seemed more important.
Engineers ran out of ideas for using transistors to make a single program run faster. So, starting last year, they began building "dual core" chips with two CPUs. That forced some of the software vendors, particularly the video game makers, to redesign their software to make better use of the second processor.
Two CPUs can do twice as much work as one CPU if you can keep both processors busy all the time. Unfortunately, that is not realistic. Even on a server, the value of each subsequent processor goes down, and on a desktop there just isn't enough work to distribute it uniformly. So while Intel is beginning to show off a Core 2 Quad chip with four CPUs, it makes little sense to go much farther than that.

Heat and Power

Computers are idle a lot of the time. When they are running, there is often only work to keep one core busy. The easy thing to do would be to design a dual core machine where both processors run all the time. Such a system will generate twice as much heat and use twice as much energy. Intel and AMD rushed their first generation of dual core processors out the door, so this is how they operate.
Given more time to do the engineering, you can design multi-core systems to shut down parts of the chip that are not being used. This is critical in a laptop system running on battery, but in today's heat and power conscious environment it is useful for even desktop machines.

Co(re)ordination

Two programs are running on your computer. While they mostly do different things, they may both store data on the same disk and they both display output on the same screen. Internally, the operating system must coordinate their concurrent access to shared resources. At the hardware level, each CPU core must coordinate access to memory and to the I/O devices.
In the old days when Intel had one CPU per chip, coordination between the processors was done by the Northbridge chip on the mainboard. That was a perfectly sensible design. However, when Intel moved to Core Duo and started to put two CPUs in the same chip, it was left with the unfortunate consequence that the two CPUs could not talk directly to each other or coordinate activity; instead they had to go out to the Northbridge chip for every such request.
When AMD came up with the Athlon 64/Opteron design, they moved memory management into the CPU. That eliminated the need for a Northbridge chip. Processors were connected to the Southbridge (and thus all I/O devices) and to other processors using HyperTransport links. Each AMD chip has one CPU, a memory manager, and 1 to 3 HyperTransport managers. AMD connected these five components to each other with a general purpose switch called the crossbar, or "XBar" for short. At the time, they may not have given much thought to multiple CPU cores, but this turned out to be an ideal design.
Inside the AMD chip, a CPU that needs data from memory, an I/O device, or another CPU makes a request to the XBar. The XBar determines if the requested data is local (another CPU on the same chip, memory controlled by this chip) or remote (another chip connected by a HyperTransport link).
The use of the XBar to connect devices in the same chip, and the HyperTransport link to connect to external devices, creates a design that is efficient, scalable, and flexible. Recently AMD purchased ATI, a leading maker of the Graphics Processing Units used on video cards. This architecture will let them explore hybrid chips that contain a CPU to run programs and a GPU to handle video, both part of the same chip. Alternative designs combine the CPU and some of the Southbridge functions to produce ultra-cheap or ultra-small boards.

Googled by
Ashwani Gupta



Wednesday, December 21, 2011

Convert Default Online Dictionary to Offline in Debian/Ubuntu


Step 1: Type the following command in the terminal to install the dictionary server:

                sudo apt-get install dictd
Step 2: To install the English dictionary definitions (the word database), paste the following command in the terminal:

                sudo apt-get install dict-gcide


Step 3: To add a thesaurus (optional) to the dictionary, paste the following command in the terminal:
        
        sudo apt-get install dict-moby-thesaurus


Converting to the Local Host for Offline Dictionary Access


Step 1: In the dictionary application, go to Edit → Preferences.
Step 2: The default dictionary source is selected. Click the Add button to add a new, offline source.
In the Description field enter any name you like (here it is "Local Server") and, most importantly, set the Hostname to 127.0.0.1.
Step 3: Select the newly added server (named Local Server here) and your dictionary is up and running, complete with the thesaurus.

That’s it. We have just made our dictionary work even without an Internet connection. You can now look up word meanings locally.
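
You can also query the local dictd server straight from the terminal, assuming the command-line client is installed (on Debian/Ubuntu it is in the dict package):

                sudo apt-get install dict
                dict -h 127.0.0.1 freedom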

Tuesday, December 6, 2011

OPEN SOURCE CANCER RESEARCH

When it comes to treating, curing, and preventing cancer, modern medicine has largely failed. You could argue that cancer is far too complicated to unravel in the few millennia we have been documenting it. Or that the billions we spend annually on research are far too little. Established incentives and policies that perpetuate research silos certainly seem to slow success.
Medical researchers have been trained in a professional culture where secrecy reigns, where they must protect their own interests. The dominant culture discourages sharing research findings and collaborating on projects. It has become more important to protect vested interests than to take advantage of the huge collaborative network that is available in academia.

This mode of thinking is a bitter pill to swallow for the quarter of our population that will die of cancer. According to the World Health Organization, one in every four deaths is attributable to cancer.

What would happen if cancer researchers were able to adopt an open and collaborative approach like the one that has--for the last two decades--revolutionized software development? What if cancer research could be open source?

Linux has been successful because a large group of people recognized a need and agreed on a process for meeting that need. The brilliance of the open source approach is in the sheer amount of brainpower participating. The open source community shows that the collective intelligence of a network is greater than any single contributor.

While the term is attributed to software development, the idea is not. In fact, some medical research does use this methodology in the same way that Linus Torvalds and others develop open source operating systems. The Human Genome Project, for example, very successfully distributed gene-mapping in efforts to speed up the sequencing of the genome. The HGP teams published their data openly, on the Internet.

More recently, a team of Harvard researchers discovered the power of distributed research. A team led by Jay Bradner at the Dana Farber Cancer Institute discovered a small-molecule inhibitor that showed promise in its ability to interrupt the aggressive growth of cancer cells. The small-molecule inhibitor, called JQ1--after Jun Qi, the chemist who made the discovery--works by suppressing a protein (bromodomain-containing 4, or Brd4) necessary for the expression of the Myc regulator gene. It is a mutated Myc gene that is believed to be at the root of many cancers. Without Brd4, Myc remains inactive. Inhibiting Myc could be part of the key to successful cancer treatments.

With the cells from an affected patient, Bradner's group successfully grew the cancer in mice and discovered that the mice with the cancer who received the compound lived, while the mice with the cancer who didn't receive the compound rapidly perished.

Instead of operating in secrecy and guarding their work, Bradner's group shared it. They simply started mailing it to friends. They sent it to Oxford crystallographers, who sent back an informative picture that helped Dr. Bradner's team better understand how the small-molecule inhibitor works so potently against Brd4.

They mailed samples to 40 labs in the US and 30 more in Europe, encouraging these labs to use it, build upon it, and share their findings in return. As a result of this open source approach, Dr. Bradner's team has learned--in less than a year--that the JQ1 small-molecule inhibitor prevents the growth of leukemia, making affected cells behave like normal white blood cells. Another group reported back that multiple myeloma cells respond dramatically to JQ1. Still another found that the inhibitor prevents adipose cells from storing fat, thus preventing fatty liver disease.

Bradner has published his findings. He has released the chemical identity of the compound, told researchers how to make it, and even offered to provide free samples to anyone in the medical research community. (If you're a researcher who'd like a sample of the JQ1 molecule, you can even contact Bradner's Lab via twitter @jaybradner.)

Bradner feels his early successes are due not only to the science, but also to the strategy. Using an open source approach, sharing the information about this molecule, and crowd-sourcing the research and the testing illustrates the opportunities that an open methodology can bring to the difficult challenges of medical research and prototype drug discovery.
In his recently released TED talk video, Dr. Bradner explains that he firmly believes that making a drug prototype freely available among researchers will help accelerate the delivery of effective cancer drugs to affected patients.

With more practice—and more familiarity with each other and this kind of collaborative research—scientists can break large, complex, time-expensive projects into smaller, achievable portions. By  spreading out those small tasks among many groups, much more work can be accomplished in a vastly reduced amount of time.

Using the old research models, Bradner’s team might have learned that JQ1 affects AML cells in the first year. But it might have been next year before they got to leukemia, and years after that before they realized it also could affect fatty liver. How many years do you think the old approach adds to the development of drugs we need today?

It is time to seriously consider a different model for scientific research–one that directly engages and benefits society, encourages open access and the free exchange of scientific information. The benefit to patients would be enormous.

Googled by
Dinesh Kamboj