Linux to Linux Key Based SSH

February 19, 2008 at 8:00 pm | Posted in Uncategorized | 1 Comment

Because OpenSSH allows you to run commands on remote systems, showing you the results directly, as well as just logging in to systems, it’s ideal for automating common tasks with shellscripts and cronjobs. One thing that you probably won’t want to do, though, is store the remote system’s password in the script. Instead you’ll want to set up SSH so that you can log in securely without having to give a password.

Thankfully this is very straightforward, with the use of public keys.

To enable the remote login you create a pair of keys, one of which you simply append to a file upon the remote system. When this is done you’ll then be able to log in without being prompted for a password – and this also includes any cronjobs you have set up to run.
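
For example, once the key is in place, a crontab entry like this hypothetical one (the user, host, and log path are made up for illustration) will happily run unattended every night:

0 2 * * * ssh skx@mystery uptime >> /home/skx/logs/mystery-uptime.log 2>&1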

If you don’t already have a keypair generated you’ll first of all need to create one.

If you do have a keypair handy already you can keep using that. By default the keys will be stored in one of the following pairs of files:

  • ~/.ssh/identity and ~/.ssh/identity.pub
    • (This is an older SSH protocol 1 key).
  • ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
    • (This is a newer RSA key).

If you have neither of the two files then you should generate one. The protocol 1 identity keys are the older kind, and should probably be ignored in favour of the newer RSA keytype (unless you’re looking at connecting to an outdated installation of OpenSSH). We’ll use the RSA keytype in the following example.

To generate a new keypair you run the following command:

skx@lappy:~$ ssh-keygen -t rsa

This will prompt you for a location to save the keys, and a pass-phrase:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/skx/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/skx/.ssh/id_rsa.
Your public key has been saved in /home/skx/.ssh/id_rsa.pub.

If you accept the defaults you’ll have a pair of files created, as shown above, with no passphrase. This means that the key files can be used as they are, without being “unlocked” with a password first. If you’re wishing to automate things this is what you want.

Now that you have a pair of keyfiles generated, or pre-existing, you need to append the contents of the .pub file to the correct location on the remote server.

Assuming that you wish to log in to the machine called mystery from your current host, with the id_rsa and id_rsa.pub files you’ve just generated, you should run the following command:

ssh-copy-id -i ~/.ssh/id_rsa.pub username@mystery

This will prompt you for the login password for the host, then copy the keyfile for you, creating the correct directory and fixing the permissions as necessary.
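
If your system doesn’t ship the ssh-copy-id script you can do the same job by hand; a minimal sketch, assuming a standard OpenSSH layout on the remote side:

cat ~/.ssh/id_rsa.pub | ssh username@mystery 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'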

The contents of the keyfile will be appended to the file ~/.ssh/authorized_keys upon the remote host (some older OpenSSH releases use a separate ~/.ssh/authorized_keys2 file for protocol 2 keys).

Once this has been done you should be able to log in remotely, and run commands, without being prompted for a password:

skx@lappy:~$ ssh mystery uptime
 09:52:50 up 96 days, 13:45,  0 users,  load average: 0.00, 0.00, 0.00

What if it doesn’t work?

There are three common problems when setting up passwordless logins:

  • The remote SSH server hasn’t been setup to allow public key authentication.
  • File permissions cause problems.
  • Your keytype isn’t supported.

Each of these problems is easily fixable, although the first will require you to have root privileges upon the remote host.

If the remote server doesn’t allow public key based logins you will need to update the SSH configuration. To do this edit the file /etc/ssh/sshd_config with your favourite text editor.

You will need to uncomment, or add, the following two lines:

RSAAuthentication yes
PubkeyAuthentication yes

Once that’s been done you can restart the SSH server – don’t worry, this won’t kill existing sessions:

/etc/init.d/ssh restart

File permission problems should be simple to fix. Upon the remote machine your .ssh directory must not be writable by any other user – for obvious reasons. (If it were writable by another user they could add their own keys to it, and log in to your account without your password!)

If this is your problem you will see a message similar to the following upon the remote machine, in the file /var/log/auth.log:

Jun  3 10:23:57 localhost sshd[18461]: Authentication refused: 
 bad ownership or modes for directory /home/skx/.ssh

To fix this error you need to login to the machine (with your password!) and run the following command:

cd
chmod 700 .ssh
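
If the complaint is about the authorized_keys file rather than the directory, tighten that too; sshd also refuses keys when your home directory itself is group- or world-writable:

chmod 600 ~/.ssh/authorized_keys
chmod go-w ~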

Finally, if you’re logging into an older system which has an outdated version of OpenSSH installed upon it which you cannot immediately upgrade, you might discover that RSA keys are not supported.

In this case use a DSA key instead – by generating one:

ssh-keygen -t dsa

Then appending it to the file ~/.ssh/authorized_keys on the remote machine – or using the ssh-copy-id command we showed earlier.

Note: if you’ve got a system running an older version of OpenSSH you should upgrade it unless you have a very good reason not to. There are known security issues in several older releases. Even if the machine isn’t connected to the public internet and is only available “internally”, you should fix it.


Instead of using authorized_keys/authorized_keys2 you could also achieve a very similar effect with the use of the ssh-agent command, although this isn’t so friendly for scripting commands.

This program allows you to type in the passphrase for any of your private keys when you login, then keeps all the keys in memory, so that you don’t have password-less keys upon your disk and still gain the benefits of reduced password usage.
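
In its simplest use you start the agent, hand it your key once, and let it answer for you from then on; a brief sketch:

eval $(ssh-agent)        # start the agent and point this shell at it
ssh-add ~/.ssh/id_rsa    # type the passphrase once; later logins won't ask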

If you’re interested read the documentation by running:

man ssh-agent

Discover the possibilities of the /proc folder

February 15, 2008 at 4:47 pm | Posted in Uncategorized | Leave a comment
The /proc directory is a strange beast. It doesn’t really exist, yet you can explore it. Its zero-length files are neither binary nor text, yet you can examine and display them. This special directory holds all the details about your Linux system, including its kernel, processes, and configuration parameters. By studying the /proc directory, you can learn how Linux commands work, and you can even do some administrative tasks.

Under Linux, everything is managed as a file; even devices are accessed as files (in the /dev directory). Although you might think that “normal” files are either text or binary (or possibly device or pipe files), the /proc directory contains a stranger type: virtual files. These files are listed, but don’t actually exist on disk; the operating system creates them on the fly if you try to read them.

Most virtual files always have a current timestamp, which indicates that they are constantly being kept up to date. The /proc directory itself is created every time you boot your box. You need to work as root to be able to examine the whole directory; some of the files (such as the process-related ones) are owned by the user who launched the corresponding process. Although almost all the files are read-only, a few writable ones (notably in /proc/sys) allow you to change kernel parameters. (Of course, you must be careful if you do this.)

/proc directory organization

The /proc directory is organized in virtual directories and subdirectories, and it groups files by similar topic. Working as root, the ls /proc command brings up something like this:

1    2432 3340 3715 3762 5441 815  devices     modules
129  2474 3358 3716 3764 5445 acpi diskstats   mounts
1290 248  3413 3717 3812 5459 asound dma       mtrr
133  2486 3435 3718 3813 5479 bus  execdomains partitions
1420 2489 3439 3728 3814 557  dri  fb          self
165  276  3450 3731 39   5842 driver filesystems slabinfo
166  280  36   3733 3973 5854 fs   interrupts  splash
2    2812 3602 3734 4    6    ide  iomem       stat
2267 3    3603 3735 40   6381 irq  ioports     swaps
2268 326  3614 3737 4083 6558 net  kallsyms    sysrq-trigger
2282 327  3696 3739 4868 6561 scsi kcore       timer_list
2285 3284 3697 3742 4873 6961 sys  keys        timer_stats
2295 329  3700 3744 4878 7206 sysvipc key-users uptime
2335 3295 3701 3745 5    7207 tty  kmsg        version
2400 330  3706 3747 5109 7222 buddyinfo loadavg vmcore
2401 3318 3709 3749 5112 7225 cmdline locks    vmstat
2427 3329 3710 3751 541  7244 config.gz meminfo zoneinfo
2428 3336 3714 3753 5440 752  cpuinfo misc
The numbered directories (more on them later) correspond to each running process; a special self symlink points to the current process. Some virtual files provide hardware information, such as /proc/cpuinfo, /proc/meminfo, and /proc/interrupts. Others give file-related info, such as /proc/filesystems or /proc/partitions. The files under /proc/sys are related to kernel configuration parameters, as we'll see. The cat /proc/meminfo command might bring up something like this:
# cat /proc/meminfo
MemTotal:       483488 kB
MemFree:          9348 kB
Buffers:          6796 kB
Cached:         168292 kB
...several lines snipped...

If you try the top or free commands, you
might recognize some of these numbers. In fact, several well-known
utilities access the /proc directory to get their information. For
example, if you want to know what kernel you're running, you might try uname -srv, or go to the source and type cat /proc/version. Some other interesting files include:

  • /proc/apm: Provides information on Advanced Power Management, if it's installed.
  • /proc/acpi: A similar directory that offers plenty of data on the more modern Advanced Configuration and Power Interface. For example, to see if your laptop is connected to the AC power, you can use cat /proc/acpi/ac_adapter/AC/state to get either "on line" or "off line."
  • /proc/cmdline: Shows the parameters that were passed to the kernel at boot time. In my case, it contains root=/dev/disk/by-id/scsi-SATA_FUJITSU_MHS2040_NLA5T3314DW3-part3 vga=0x317 resume=/dev/sda2 splash=silent PROFILE=QuintaWiFi,
    which tells me which partition is the root of the filesystem, which VGA
    mode to use, and more. The last parameter has to do with openSUSE's System Configuration Profile Management.
  • /proc/cpuinfo: Provides data on the processor of your box. For example, in my laptop, cat /proc/cpuinfo gets me a listing that starts with:
    processor  : 0
    vendor_id  : AuthenticAMD
    cpu family : 6
    model      : 8
    model name : Mobile AMD Athlon(tm) XP 2200+
    stepping   : 1
    cpu MHz    : 927.549
    cache size : 256 KB

    This shows that I have only one processor, numbered 0, of the 80686 family (the 6 in cpu family goes as the middle digit): an AMD Athlon XP, running at less than 1GHz.

  • /proc/loadavg: A related file that shows the average load
    on the processor; its information includes CPU usage in the last
    minute, last five minutes, and last 15 minutes, as well as the number
    of currently running processes.
  • /proc/stat: Also gives statistics, but goes back to the last boot.
  • /proc/uptime: A short file that has only two numbers: how many seconds your box has been up, and how many seconds it has been idle (see the sketch after this list).
  • /proc/devices: Displays all currently configured and loaded character and block devices. /proc/ide and /proc/scsi provide data on IDE and SCSI devices.
  • /proc/ioports: Shows you information about the regions used for I/O communication with those devices.
  • /proc/dma: Shows the Direct Memory Access channels in use.
  • /proc/filesystems: Shows which filesystem types are supported by your kernel. A portion of this file might look like this:
    nodev   sysfs
    nodev   rootfs
    nodev   bdev
    nodev   proc
    nodev   cpuset
    ...some lines snipped...
    nodev   ramfs
    nodev   hugetlbfs
    nodev   mqueue
            ext3
    nodev   usbfs
            ext2
    nodev   autofs

    The first column shows whether the filesystem requires a block device; the nodev entries do not. In my case, I have partitions configured with ext2 and ext3 mounted.

  • /proc/mounts: Shows all the mounts used by your machine
    (its output looks much like /etc/mtab). Similarly, /proc/partitions and
    /proc/swaps show all partitions and swap space.
  • /proc/fs: If you're exporting filesystems with NFS,
    this directory has among its many subdirectories and files
    /proc/fs/nfsd/exports, which shows the file systems that are being
    shared and their permissions.
  • /proc/net: You can't beat this for network information.
    Describing each file in this directory would require too much space,
    but it includes /dev (each network device), several iptables (firewall) related files, net and socket statistics, wireless information, and more.
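
As promised in the /proc/uptime item above, here is a small sketch that turns its first field into something human-readable, using nothing but awk:

awk '{ up=$1; printf "up %d days, %02d:%02d\n", up/86400, (up%86400)/3600, (up%3600)/60 }' /proc/uptime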

There are also several RAM-related files. I've already mentioned
/proc/meminfo, but you've also got /proc/iomem, which shows you how RAM
memory is used in your box, and /proc/kcore, which represents the
physical RAM of your box. Unlike most other virtual files, /proc/kcore
shows a size that's equal to your RAM plus a small overhead. (Don't try
to cat this file, because its contents are binary and
will mess up your screen.) Finally, there are many hardware-related
files and directories, such as /proc/interrupts and /proc/irq,
/proc/pci (all PCI devices), /proc/bus, and so on, but they include
very specific information, which most users won't need.

What's in a process?

As I said, the numerically named directories represent all running
processes. When a process ends, its /proc directory disappears
automatically. If you check any of these directories while they exist,
you will find plenty of files, such as:

attr             cpuset   fdinfo    mountstats  stat
auxv             cwd      loginuid  oom_adj     statm
clear_refs       environ  maps      oom_score   status
cmdline          exe      mem       root        task
coredump_filter  fd       mounts    smaps       wchan

Let's take a look at the principal files:

  • cmdline: Contains the command that started the process, with all its parameters.
  • cwd: A symlink to the current working directory (CWD) for
    the process; exe links to the process executable, and root links to its
    root directory.
  • environ: Shows all environment variables for the process.
  • fd: Contains all file descriptors for a process, showing which files or devices it is using.
  • maps, statm, and mem: Deal with the memory in use by the process.
  • stat and status: Provide information about the status of the process, but the latter is far clearer than the former.

These files provide several script programming challenges. For example, if you want to hunt for zombie processes, you could scan all numbered directories and check whether “Z (zombie)” appears in each status file. I once needed to check whether a certain program was running; I did a scan and looked at the cmdline files instead, searching for the desired string. (You can also do this by working with the output of the ps command, but that's not the point here.) And if you want to program a better-looking top, all the needed information is right at your fingertips.
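
Here is a minimal sketch of that zombie hunt; it assumes the kernel reports the state in each status file as “Z (zombie)”, which is what the State: line looks like on current kernels:

for d in /proc/[0-9]*; do
    grep -l "(zombie)" "$d/status" 2>/dev/null
done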

Tweaking the system: /proc/sys

/proc/sys not only provides information about the system, it also
allows you to change kernel parameters on the fly, and enable or
disable features. (Of course, this could prove harmful to your system
-- consider yourself warned!)

To determine whether you can configure a file or if it's just read-only, use ls -ld; if a file has the "w" (write) attribute, it means you may use it to configure the kernel somehow. For example, ls -ld /proc/sys/kernel/* starts like this:

dr-xr-xr-x 0 root root 0 2008-01-26 00:49 pty
dr-xr-xr-x 0 root root 0 2008-01-26 00:49 random
-rw-r--r-- 1 root root 0 2008-01-26 00:49 acct
-rw-r--r-- 1 root root 0 2008-01-26 00:49 acpi_video_flags
-rw-r--r-- 1 root root 0 2008-01-26 00:49 audit_argv_kb
-r--r--r-- 1 root root 0 2008-01-26 00:49 bootloader_type
-rw------- 1 root root 0 2008-01-26 00:49 cad_pid
-rw------- 1 root root 0 2008-01-26 00:49 cap-bound

You can see that bootloader_type isn't meant to be changed, but other files are. To change a file, use something like echo 10 >/proc/sys/vm/swappiness. This particular example would allow you to tune the virtual memory
paging performance. By the way, these changes are only temporary, and
their effects will disappear when you reboot your system; use sysctl and the /etc/sysctl.conf file to effect more permanent changes.
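
For the swappiness example above, the sysctl equivalents look like this; the value 10 is just the figure from the text:

sysctl -w vm.swappiness=10                      # same effect as the echo, gone after a reboot
echo "vm.swappiness = 10" >> /etc/sysctl.conf   # read again at every boot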

Let's take a high-level look at the /proc/sys directories:

  • debug: Has (surprise!) debugging information. This is good if you're into kernel development.
  • dev: Provides parameters for specific devices on your system; for example, check the /proc/sys/dev/cdrom directory.
  • fs: Offers data on every possible aspect of the filesystem.
  • kernel: Lets you affect the kernel configuration and operation directly.
  • net: Lets you control network-related matters. Be careful, because messing with this can make you lose connectivity!
  • vm: Deals with the VM subsystem.

Conclusion

The /proc special directory provides full detailed information about
the inner workings of Linux and lets you fine-tune many aspects of its
configuration. If you spend some time learning all the possibilities of
this directory, you'll be able to get a better-tuned Linux box. And
isn't that something we all want?

Installing and Configuring GNUMP3d, The Streaming MP3/OGG Server

February 14, 2008 at 2:02 pm | Posted in Uncategorized | Leave a comment

Sharing something with friends or anyone over a network (LAN or WAN) is really a great thing, and one of the best things to share is music. So if you have a great collection of digital music stored on your hard disk, it’s time to share it with other people. Never say that you don’t have any idea how to share your music. No.. no.. no.., wake up! Remember that we live in the world of free and open source software; we have many choices and, most importantly, they’re free. What? You’re still using that proprietary thing? Oh.. come on.

First Thing First

Although there are many choices out there, we’ll try to install and configure GNUMP3d. You may ask, why? Because I’ve tried it, and I think it’s nice, cool, great, secure, easy to use, and free, and it’s included in the Ubuntu 7.10 repositories. That’s why I want to share my experience of installing, configuring, and using it. You won’t believe it until you try it.

For your information, I run GNUMP3d on Ubuntu 7.10 with the Apache2 web server. First of all, you have to make sure that the Apache web server has been installed on your system; if it hasn’t, it’s time to

$ sudo apt-get install apache2

Easy, right? Now let’s move on.

Installing GNUMP3d

All you have to do is just to open a terminal emulator (xterm, gnome-terminal, konsole, etc) and issue this command

$ sudo apt-get install gnump3d

and Ubuntu will do the rest. If you dislike the command line interface, you may choose Synaptic with a nice GUI.

Configuring GNUMP3d

The configuration of this streaming mp3/ogg server is stored in /etc/gnump3d, so let’s go there

$ cd /etc/gnump3d

You’ll see three files here; for now just pay attention to the main configuration file, gnump3d.conf. We need to edit this file, but don’t forget to back it up first.

$ sudo cp gnump3d.conf gnump3d.conf_original
$ sudo vim /etc/gnump3d/gnump3d.conf

The first thing you need to change is the root directory of this server. Find this line

root = /var/music

change the value to the directory where you stored your music,

root = /media/multimedia/musik

Then, you need to change the user who runs this server. Find this line

user = gnump3d

change the value; the result will be like this

user = root
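
After both edits, the relevant part of gnump3d.conf looks something like this (a sketch; the port line is shown only for orientation, and 8888 is gnump3d’s default):

root = /media/multimedia/musik
user = root
port = 8888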

There are many settings you may change in this file; just read the explanations there to understand them. For more information see the manual page of gnump3d.conf. Now, save your configuration and restart the server.

$ sudo /etc/init.d/gnump3d restart

Trying The Server

Everything is ready, now it’s time to try your new streaming mp3/ogg server. Open your favorite web browser (e.g. Mozilla Firefox, Opera, Konqueror, etc.), type http://localhost:8888, then hit Enter. If everything goes well you’ll see the main page of your server, ready to serve your network.

Okay, that’s all for today. Please do not hesitate to correct me if something’s wrong, whether it’s the grammar (I’m still learning English) or anything else.


Make the Windows Key on your Keyboard open KMenu in KDE

January 31, 2008 at 8:49 pm | Posted in Uncategorized | Leave a comment

Please See: Make the Windows Key on your Keyboard open KMenu in KDE

Learn 10 good UNIX usage habits

January 25, 2008 at 1:12 pm | Posted in Uncategorized | Leave a comment

Adopt 10 good habits that improve your UNIX® command line efficiency — and break away from bad usage patterns in the process. This article takes you step-by-step through several good, but too often neglected, techniques for command-line operations. Learn about common errors and how to overcome them, so you can learn exactly why these UNIX habits are worth picking up.

Introduction

When you use a system often, you tend to fall into set usage patterns. Sometimes, you do not start the habit of doing things in the best possible way. Sometimes, you even pick up bad practices that lead to clutter and clumsiness. One of the best ways to correct such inadequacies is to conscientiously pick up good habits that counteract them. This article suggests 10 UNIX command-line habits worth picking up — good habits that help you break many common usage foibles and make you more productive at the command line in the process. Each habit is described in more detail following the list of good habits.

Adopt 10 good habits
Make directory trees in a single swipe

Listing 1 illustrates one of the most common bad UNIX habits around: defining directory trees one at a time.
Listing 1. Example of bad habit #1: Defining directory trees individually

~ $ mkdir tmp
~ $ cd tmp
~/tmp $ mkdir a
~/tmp $ cd a
~/tmp/a $ mkdir b
~/tmp/a $ cd b
~/tmp/a/b/ $ mkdir c
~/tmp/a/b/ $ cd c
~/tmp/a/b/c $

It is so much quicker to use the -p option to mkdir and make all parent directories along with their children in a single command. But even administrators who know about this option are still caught stepping through the subdirectories as they make them on the command line. It is worth your time to conscientiously pick up the good habit:
Listing 2. Example of good habit #1: Defining directory trees with one command

~ $ mkdir -p tmp/a/b/c

You can use this option to make entire complex directory trees, not just simple hierarchies, which are great for use inside scripts. For example:
Listing 3. Another example of good habit #1: Defining complex directory trees with one command

~ $ mkdir -p project/{lib/ext,bin,src,doc/{html,info,pdf},demo/stat/a}

In the past, the only excuse to define directories individually was that your mkdir implementation did not support this option, but this is no longer true on most systems. IBM AIX mkdir, GNU mkdir, and others that conform to the Single UNIX Specification now have this option.

For the few systems that still lack the capability, use the mkdirhier script (see Resources), which is a wrapper for mkdir that does the same function:

~ $ mkdirhier project/{lib/ext,bin,src,doc/{html,info,pdf},demo/stat/a}

Change the path; do not move the archive

Another bad usage pattern is moving a .tar archive file to a certain directory because it happens to be the directory you want to extract it in. You never need to do this. You can unpack any .tar archive file into any directory you like — that is what the -C option is for. Use the -C option when unpacking an archive file to specify the directory to unpack it in:
Listing 4. Example of good habit #2: Using option -C to unpack a .tar archive file

~ $ tar xzvf newarc.tar.gz -C tmp/a/b/c

Making a habit of using -C is preferable to moving the archive file to where you want to unpack it, changing to that directory, and only then extracting its contents — especially if the archive file belongs somewhere else.

   

Combine your commands with control operators

You probably already know that in most shells, you can combine commands on a single command line by placing a semicolon (;) between them. The semicolon is a shell control operator, and while it is useful for stringing together multiple discrete commands on a single command line, it does not work for everything. For example, suppose you use a semicolon to combine two commands in which the proper execution of the second command depends entirely upon the successful completion of the first. If the first command does not exit as you expected, the second command still runs, and fails. Instead, use more appropriate control operators (some are described in this article). As long as your shell supports them, they are worth getting into the habit of using.

Run a command only if another command returns a zero exit status

Use the && control operator to combine two commands so that the second is run only if the first command returns a zero exit status. In other words, if the first command runs successfully, the second command runs. If the first command fails, the second command does not run at all. For example:
Listing 5. Example of good habit #3: Combining commands with control operators

~ $ cd tmp/a/b/c && tar xvf ~/archive.tar

In this example, the contents of the archive are extracted into the ~/tmp/a/b/c directory if it exists. If the directory does not exist, the tar command does not run, so nothing is extracted.

Run a command only if another command returns a non-zero exit status

Similarly, the || control operator separates two commands and runs the second command only if the first command returns a non-zero exit status. In other words, if the first command is successful, the second command does not run. If the first command fails, the second command does run. This operator is often used when testing whether a given directory exists and creating it if it does not:
Listing 6. Another example of good habit #3: Combining commands with control operators

~ $ cd tmp/a/b/c || mkdir -p tmp/a/b/c

You can also combine the control operators described in this section. Each works on the last command run:

Listing 7. A combined example of good habit #3: Combining commands with control operators

~ $ cd tmp/a/b/c || mkdir -p tmp/a/b/c && tar xvf ~/archive.tar -C tmp/a/b/c
 

Quote variables with caution

Always be careful with shell expansion and variable names. It is generally a good idea to enclose variable calls in double quotation marks, unless you have a good reason not to. Similarly, if you are directly following a variable name with alphanumeric text, be sure also to enclose the variable name in curly braces ({}) to distinguish it from the surrounding text. Otherwise, the shell interprets the trailing text as part of your variable name — and most likely returns a null value. Listing 8 provides examples of various quotation and non-quotation of variables and their effects.
Listing 8. Example of good habit #4: Quoting (and not quoting) a variable

~ $ ls tmp/
a b
~ $ VAR="tmp/*"
~ $ echo $VAR
tmp/a tmp/b
~ $ echo "$VAR"
tmp/*
~ $ echo $VARa

~ $ echo "$VARa"

~ $ echo "${VAR}a"
tmp/*a
~ $ echo ${VAR}a
tmp/a
~ $
 

Use escape sequences to manage long input

You have probably seen code examples in which a backslash (\) continues a long line over to the next line, and you know that most shells treat what you type over successive lines joined by a backslash as one long line. However, you might not take advantage of this function on the command line as often as you can. The backslash is especially handy if your terminal does not handle multi-line wrapping properly or when your command line is smaller than usual (such as when you have a long path on the prompt). The backslash is also useful for making sense of long input lines as you type them, as in the following example:
Listing 9. Example of good habit #5: Using a backslash for long input

~ $ cd tmp/a/b/c || \
> mkdir -p tmp/a/b/c && \
> tar xvf ~/archive.tar -C tmp/a/b/c

Alternatively, the following configuration also works:
Listing 10. Alternative example of good habit #5: Using a backslash for long input

~ $ cd tmp/a/b/c \
>                 || \
> mkdir -p tmp/a/b/c \
>                    && \
> tar xvf ~/archive.tar -C tmp/a/b/c

However you divide an input line over multiple lines, the shell always treats it as one continuous line, because it always strips out all the backslashes and extra spaces.

Note: In most shells, when you press the up arrow key, the entire multi-line entry is redrawn on a single, long input line.

 

Group your commands together in a list

Most shells have ways to group a set of commands together in a list so that you can pass their sum-total output down a pipeline or otherwise redirect any or all of its streams to the same place. You can generally do this by running a list of commands in a subshell or by running a list of commands in the current shell.

Run a list of commands in a subshell

Use parentheses to enclose a list of commands in a single group. Doing so runs the commands in a new subshell and allows you to redirect or otherwise collect the output of the whole, as in the following example:
Listing 11. Example of good habit #6: Running a list of commands in a subshell

~ $ ( cd tmp/a/b/c/ || mkdir -p tmp/a/b/c && \
> VAR=$PWD; cd ~; tar xvf archive.tar -C $VAR ) \
> | mailx -s "Archive contents" admin

In this example, the content of the archive is extracted in the tmp/a/b/c/ directory while the output of the grouped commands, including a list of extracted files, is mailed to the admin address.

The use of a subshell is preferable in cases when you are redefining environment variables in your list of commands and you do not want those definitions to apply to your current shell.

Run a list of commands in the current shell

Use curly braces ({}) to enclose a list of commands to run in the current shell. Make sure you include spaces between the braces and the actual commands, or the shell might not interpret the braces correctly. Also, make sure that the final command in your list ends with a semicolon, as in the following example:
Listing 12. Another example of good habit #6: Running a list of commands in the current shell

~ $ { cp ${VAR}a . && chown -R guest.guest a && \
> tar cvf newarchive.tar a; } | mailx -s "New archive" admin
 

Use xargs outside of find

Use the xargs tool as a filter for making good use of output culled from the find command. The general precept is that a find run provides a list of files that match some criteria. This list is passed on to xargs, which then runs some other useful command with that list of files as arguments, as in the following example:
Listing 13. Example of the classic use of the xargs tool

~ $ find some-file-path some-file-criteria | \
> xargs some-great-command-that-needs-filename-arguments

However, do not think of xargs as just a helper for find; it is one of those underutilized tools that, when you get into the habit of using it, you want to try on everything, including the following uses.

Passing a space-delimited list

In its simplest invocation, xargs is like a filter that takes as input a list (with each member on a single line). The tool puts those members on a single space-delimited line:
Listing 14. Example of output from the xargs tool

~ $ xargs
a
b
c
Control-D
a b c
~ $

You can send the output of any tool that outputs file names through xargs to get a list of arguments for some other tool that takes file names as an argument, as in the following example:
Listing 15. Example of using of the xargs tool

~/tmp $ ls -1 | xargs
December_Report.pdf README a archive.tar mkdirhier.sh
~/tmp $ ls -1 | xargs file
December_Report.pdf: PDF document, version 1.3
README: ASCII text
a: directory
archive.tar: POSIX tar archive
mkdirhier.sh: Bourne shell script text executable
~/tmp $

The xargs command is useful for more than passing file names. Use it any time you need to filter text into a single line:
Listing 16. Example of good habit #7: Using the xargs tool to filter text into a single line

~/tmp $ ls -l | xargs
-rw-r--r-- 7 joe joe 12043 Jan 27 20:36 December_Report.pdf -rw-r--r-- 1 \
root root 238 Dec 03 08:19 README drwxr-xr-x 38 joe joe 354082 Nov 02 \
16:07 a -rw-r--r-- 3 joe joe 5096 Dec 14 14:26 archive.tar -rwxr-xr-x 1 \
joe joe 3239 Sep 30 12:40 mkdirhier.sh
~/tmp $

Be cautious using xargs

Technically, a rare situation occurs in which you could get into trouble using xargs. By default, the end-of-file string is an underscore (_); if that character is sent as a single input argument, everything after it is ignored. As a precaution against this, use the -e flag, which, without arguments, turns off the end-of-file string completely.
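
To see the difference, compare these two runs on an xargs whose default end-of-file string is the underscore (true of the older GNU xargs this tip is aimed at; newer versions only honour an end-of-file string if you set one explicitly):

~ $ printf 'a\n_\nb\n' | xargs
a
~ $ printf 'a\n_\nb\n' | xargs -e
a _ b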

 

Know when grep should do the counting — and when it should step aside

Avoid piping a grep to wc -l in order to count the number of lines of output. The -c option to grep gives a count of lines that match the specified pattern and is generally faster than a pipe to wc, as in the following example:
Listing 17. Example of good habit #8: Counting lines with and without grep

~ $ time grep and tmp/a/longfile.txt | wc -l
2811

real    0m0.097s
user    0m0.006s
sys     0m0.032s
~ $ time grep -c and tmp/a/longfile.txt
2811

real    0m0.013s
user    0m0.006s
sys     0m0.005s
~ $

In addition to the speed factor, the -c option is also a better way to do the counting. With multiple files, grep with the -c option returns a separate count for each file, one on each line, whereas a pipe to wc gives a total count for all files combined.

However, regardless of speed considerations, this example showcases another common error to avoid. These counting methods only give counts of the number of lines containing matched patterns — and if that is what you are looking for, that is great. But in cases where lines can have multiple instances of a particular pattern, these methods do not give you a true count of the actual number of instances matched. To count the number of instances, use wc to count, after all. First, run a grep command with the -o option, if your version supports it. This option outputs only the matched pattern, one on each line, and not the line itself. But you cannot use it in conjunction with the -c option, so use wc -l to count the lines, as in the following example:
Listing 18. Example of good habit #8: Counting pattern instances with grep

~ $ grep -o and tmp/a/longfile.txt | wc -l
3402
~ $

In this case, a call to wc is slightly faster than a second call to grep with a dummy pattern put in to match and count each line (such as grep -c).

 

Match certain fields in output, not just lines

A tool like awk is preferable to grep when you want to match the pattern in only a specific field in the lines of output and not just anywhere in the lines.

The following simplified example shows how to list only those files modified in December:
Listing 19. Example of bad habit #9: Using grep to find patterns in specific fields

~/tmp $ ls -l | grep Dec
-rw-r--r--  7 joe joe  12043 Jan 27 20:36 December_Report.pdf
-rw-r--r--  1 root root  238 Dec 03 08:19 README
-rw-r--r--  3 joe joe   5096 Dec 14 14:26 archive.tar
~/tmp $

In this example, grep filters the lines, outputting all files with Dec in their modification dates as well as in their names. Therefore, a file such as December_Report.pdf is matched, even if it has not been modified since January. This probably is not what you want. To match a pattern in a particular field, it is better to use awk, where a relational operator matches the exact field, as in the following example:
Listing 20. Example of good habit #9: Using awk to find patterns in specific fields

~/tmp $ ls -l | awk '$6 == "Dec"'
-rw-r--r--  3 joe joe   5096 Dec 14 14:26 archive.tar
-rw-r--r--  1 root root  238 Dec 03 08:19 README
~/tmp $

See Resources for more details about how to use awk.

 

Stop piping cats

A basic-but-common grep usage error involves piping the output of cat to grep to search the contents of a single file. This is absolutely unnecessary and a waste of time, because tools such as grep take file names as arguments. You simply do not need to use cat in this situation at all, as in the following example:
Listing 21. Example of good and bad habit #10: Using grep with and without cat

~ $ time cat tmp/a/longfile.txt | grep -c and
2811

real    0m0.015s
user    0m0.003s
sys     0m0.013s
~ $ time grep -c and tmp/a/longfile.txt
2811

real    0m0.010s
user    0m0.006s
sys     0m0.004s
~ $

This mistake applies to many tools. Because most tools take standard input when given a hyphen (-) as an argument, even the argument for using cat to intersperse multiple files with stdin is often not valid. It is really only necessary to concatenate before a pipe when you use cat with one of its several filtering options.
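
For example, in the following hypothetical pipeline cat genuinely earns its place, because its -n filtering option numbers the lines before grep sees them (grep -n could do the same here, but the point is that cat is now doing real work):

~ $ cat -n tmp/a/longfile.txt | grep and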

How to get copy, conversion power with dd

January 24, 2008 at 6:43 pm | Posted in Uncategorized | Leave a comment

System administrators often have to copy data around. Copying and converting ordinary data is easily accomplished with the Linux command called cp. However, if the data is not ordinary, cp is not powerful enough. The needed power can be found in the dd command, and here are some ways to put that power to good use.

The dd command handles convert-and-copy tasks. Obviously, “cc” would have been a better name, but there already was a command with that name when dd was invented. It doesn’t matter, since it’s a cool command.

Cloning a hard drive with dd

The dd command doesn’t just copy files; it copies blocks, too. As a simple example, I’ll show you how to clone your entire hard drive.

Assuming that /dev/sda is the drive that you want to clone, and /dev/sdb is an empty drive that can be used as the target, using the dd command is rather easy:

dd if=/dev/sda of=/dev/sdb

In that example, dd is used with only two parameters: if, which is used to specify an input file, and of, which is used to specify the output file. In this case, both input and output files are device files. Wait until the command is finished, and you will end up with an exact copy of the original hard drive.

In the previous example, the contents of a device were copied to another device. A slight variation on that is the way dd can be used to clone a DVD or CD-ROM and write it to an ISO file. If your optical drive can be accessed via /dev/cdrom, then you can clone the optical disk using:

dd if=/dev/cdrom of=/tmp/cdrom.iso

You can also mount that ISO file using mount -o loop /tmp/cdrom.iso /mnt. Next, you can access the files in the ISO file from the directory where the ISO is mounted.

Creating a backup of the Master Boot Record

So far, we have used dd to perform tasks that can be done with other utilities as well. Now we can go beyond that. In the following example, we create a backup of the Master Boot Record (MBR).

Copy the first 512 bytes of your hard drive, which contain the MBR, to a file. For instance, you could do this with the command

dd if=/dev/sda of=/boot/mbr_backup bs=512 count=1

In this scenario, two new parameters are used:

  • First, there’s the parameter bs=512, which specifies that a blocksize of 512 bytes should be used.
  • Next, the parameter count=1 is used to indicate that only one such block has to be copied. Without that parameter, you would copy your entire hard drive.

The backup copy of your MBR may be useful if, some day, you can’t boot your server anymore because of a problem in the MBR. In case that happens, just boot from a rescue disk and use the command:

dd if=/boot/mbr_backup of=/dev/sda bs=446 count=1

In this restore command, only 446 bytes are written back. This is because you may have changed the partition table since you’ve created the backup. By writing back only the first 446 bytes of your backup file, you don’t overwrite the original partition table, which is between bytes 447 and 511.

Extending swap space

In the last example of how dd can save your life, I’ll show you how to extend your swap space by adding a swap file.

Let’s say that you’re alerted at 3 a.m. by a message saying your server is about to run out of memory entirely, due to an undiscovered memory leak. All you have to do is create a file filled with zeros and specify that it should be added to the swap space.

Creating this empty file is an excellent task for dd:

dd if=/dev/zero of=/swapfile bs=1024 count=1000000

This would write a file of one gigabyte, and that can be added to the swap space using mkswap /swapfile and swapon /swapfile.
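
Put together, the whole emergency procedure looks like this; a sketch, with a chmod added as standard good practice rather than taken from the text above:

dd if=/dev/zero of=/swapfile bs=1024 count=1000000
chmod 600 /swapfile   # keep the swap file readable by root only
mkswap /swapfile      # set up the file as swap space
swapon /swapfile      # activate it
swapon -s             # verify that the new swap space is listed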

In this article, you’ve learned how to do some basic troubleshooting using the dd utility. As you have read, dd is a very versatile utility that goes far beyond the capabilities of ordinary copy tools like cp. Its ability to work with blocks instead of files is especially valuable.

Find How Many Files are Open and How Many Allowed in Linux

January 22, 2008 at 8:14 pm | Posted in Uncategorized | Leave a comment

To find how many files are open at any given time you can type this on the terminal: cat /proc/sys/fs/file-nr

I got these numbers:

6240  (total allocated file descriptors since boot)
0     (total free allocated file descriptors)
94297 (maximum open file descriptors)
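
If you want to pull those three fields apart in a script, a short awk one-liner does it (a sketch):

awk '{ printf "allocated: %s\nfree: %s\nmax: %s\n", $1, $2, $3 }' /proc/sys/fs/file-nr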

Note that you can check the maximum number of open files by using this command: cat /proc/sys/fs/file-max

And change the max to your own liking with this command: echo "804854" > /proc/sys/fs/file-max

You can also use the lsof command to check the number of files currently open (lsof | wc -l), but this takes into account open files that are not using file descriptors, such as directories, memory mapped files, and executable text files, so it will actually show higher numbers than the previous method.

6 ways to find files in linux

January 22, 2008 at 8:10 pm | Posted in Uncategorized | Leave a comment

Please see: 6 ways to find files in Linux

Installing VMware Workstation on Fedora 7

January 22, 2008 at 2:44 pm | Posted in Uncategorized | Leave a comment

1. Install software needed by VMware Workstation

  1. Install packages to build the kernel modules
    yum install gcc gcc-c++ kernel-devel
  2. Check the running kernel matches the kernel headers
    uname -r             # running kernel
    rpm -q kernel-devel  # installed kernel headers
  3. If the two versions do not match, run
    yum -y upgrade kernel kernel-devel
    reboot
  4. Find out where the kernel headers are (you may need this later)
    ls -d /usr/src/kernels/$(uname -r)*/include

2. Download VMware Workstation

  1. Go to http://www.vmware.com/download/ws/ if you haven’t already.

3. Do the install

  1. Uncompress Workstation
    tar zxvf VMware-workstation-5.5.4-44386.tar.gz
  2. Change directory
    cd vmware-distrib/
  3. Run the installer
    ./vmware-install.pl
  4. When asked Do you want to run vmware-config.pl?, answer “No”, because you will need to patch vmware-config.pl first.
  5. Backup vmware-config.pl
    cp /usr/bin/vmware-config.pl /usr/bin/vmware-config.pl.orig
  6. Patch vmware-config.pl
    cd /tmp/
    wget http://platan.vc.cvut.cz/ftp/pub/vmware/vmware-any-any-update113.tar.gz
    tar zxvf vmware-any-any-update113.tar.gz
    cd vmware-any-any-update113/
    ./runme.pl
  7. When asked Do you want to run vmware-config.pl?, answer “Yes”.

Flipping the Linux switch: New users guide to the terminal

January 18, 2008 at 1:17 pm | Posted in Uncategorized | Leave a comment

Please see: An Introduction to Linux Command Line Absolute Basics

