Car Reviews, News Car and Technology Car

Sunday, 15 February 2009

A sample .profile file for Korn shell and Bash users

Introduction

This article presents a sample .profile file that can be used by Unix Korn shell and Bash users.

The intent of this brief article is to show some of the things that are possible with the Korn and Bash shells. Please feel free to use this .profile at your own site, and modify it as desired!


Brief discussion

Listing 1 shows our sample .profile file. For Korn shell and Bash users, the .profile is the shell startup file, just as autoexec.bat is the startup file for DOS users.

#--------------------------------------------------------#
# A sample .profile file for Korn shell and Bash users #
#--------------------------------------------------------#
# Courtesy of Developer's Daily #
# http://www.DevDaily.com #
#--------------------------------------------------------#

PATH=$PATH:/usr/local/bin:/usr/gnu/bin:.

set -o vi    # enable vi-style editing to recall previous commands
PS1='$PWD> ' # set the prompt to display the current directory

#---------------------------------#
# a few Korn/Bash shell aliases #
#---------------------------------#

alias lc="ls -C"
alias lm="ls -al | more"
alias dirs="ls -al | grep '^d'" # show the directories in the current directory
alias h=history # show the history of commands issued

alias nu="who|wc -l" # nu - number of users
alias np="ps -ef|wc -l" # np - number of processes running
alias p="ps -ef"

# mimic a few DOS commands with these aliases:

alias cd..="cd ../.."
alias cd...="cd ../../.."
alias dir="ls -al"
alias edit=vi
alias help=man
alias path='echo $PATH' # single quotes so $PATH expands when the alias runs



Listing 1: profile - a sample .profile file for Korn shell and Bash users.
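To try the profile without logging out and back in, source it into the current shell. The snippet below uses a temporary copy (the /tmp path is just for illustration) so the real ~/.profile stays untouched; the same `.` command works on ~/.profile itself:

```shell
#!/bin/sh
# Write a minimal copy of the profile to a temporary file...
cat > /tmp/test_profile <<'EOF'
PATH=$PATH:/usr/local/bin:/usr/gnu/bin:.
alias lc="ls -C"
alias h=history
EOF

# ...and source it into the current shell (same command works for ~/.profile):
. /tmp/test_profile

# Verify the changes took effect:
echo "$PATH"   # should now end with :/usr/local/bin:/usr/gnu/bin:.
alias lc       # should print the alias definition
```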



Download the .profile file

If you'd like to download the .profile into a separate window instead of cutting-and-pasting the text from Listing 1, click here, and the text will be displayed in the next window. Then just select File|Save As... from your browser to save the .profile to your local filesystem.


Introduction to crontabs and cronjobs

Read First: It is important to note that there are different distributions of cron and your mileage may vary depending on the cron distribution, version, operating system, configuration and permissions allowed to you by your administrator.

What is Cron?
Cron is a Unix/Linux daemon that allows you to schedule the execution of tasks at regular intervals. Cron is commonly used by system administrators who wish to automate administration, or by webmasters who need scripts to run periodically to update or back up their web sites. You can define both the program that is to be run and the minute, hour, day of the month, month, day of the week, or any combination of these.

Cron is active from the time the computer is turned on until the computer is turned off or cron is restarted. It "wakes up" every minute, checks whether any tasks are scheduled to run in the current minute, and runs them.

This list of tasks is called a cron table, or crontab for short. The crontab is a schedule that lists commands to perform and the times/dates at which they are supposed to run.

On most systems, you must get permission from the system administrator before you can submit job requests to cron. On many shared systems, because there is only one crontab file, only the administrator has access to the crontab command.

Crontab Usage
The crontab commands and functions vary slightly depending on the type of *nix you are running. You can type crontab at the shell or telnet prompt to view the available options. On our FreeBSD system we have the following:

Code :

crontab: usage error: file name must be specified for replace
usage: crontab [-u user] file
crontab [-u user] { -e | -l | -r }
(default operation is replace, per 1003.2)
-e (edit user's crontab)
-l (list user's crontab)
-r (delete user's crontab)


Below is more information about these options:

* crontab -e - Edit your crontab file, or create one if it doesn't already exist.

* crontab -l - Display your crontab file.

* crontab -r - Remove your crontab file.

* crontab -u user - Can be used with the -e, -l and/or -r options to modify or view the crontab file of user. Where available, only administrators can use this option.



Crontab Fields
To create a new crontab, you would type crontab -e at a shell or telnet prompt. This will launch whatever the default text editor is (in our case this is PICO). Before we proceed, we need to understand how the commands that you will enter are structured.

Each entry in a crontab file consists of six fields, specifying in the following order:
minute(s) hour(s) day(s) month(s) weekday(s) command(s)

The first five fields are as follows:

* The number of minutes after the hour (0 to 59)

* The hour in military time (24 hour) format (0 to 23)

* The day of the month (1 to 31)

* The month (1 to 12)

* The day of the week (0=Sunday to 6=Saturday)



There are also three special characters that can be used in these fields as follows:

* Asterisks (*) - Asterisks are used to designate every instance in the field. For example, in the day field it would mean this command will run every day.
* Commas (,) - Commas are used to separate values in a field. For example, in the hours field 2,8,16 would mean this command will run at 2am, 8am and 4pm.

* Dashes (-) - Dashes are used to include every value between the first and last value. For example, 2-4 in the day of the week field would mean run on Tuesday through Thursday. Note that 2,3,4 in the day of the week field would accomplish the same thing.



Each field is separated by a space to designate the start of a new field; this is why you must not leave spaces within a field's values. For example, 1 - 6 will not work and should be 1-6.

The sixth field is where you enter the command that you would like executed. For example, if we wanted to run a Perl script called build.cgi, the sixth field would look like perl /full/path/to/build.cgi. If this is still a little hazy, the following examples section should firm it up.

Examples

Code :

00 1 * * * perl /full/path/to/build.cgi


This is a basic one. It would execute build.cgi every day at 1am.

Code :

00 2 15 3,6,9,12 * /full/path/to/indexer.pl


This would execute indexer.pl at 2am on the 15th of March, June, September and December.

Code :

00 3 1,15 * 1 /full/path/to/indexer.pl


This will execute indexer.pl at 3am every Monday AND on the 1st and 15th of the month.

Code :

30 17 * * 5 echo "Time for happy hour" | mail -s "Meet For Happy Hour" jsprague


This example would send me an email with a subject of "Meet For Happy Hour" every Friday at 5:30pm reminding me to go to happy hour.

Code :

00 17 * * 1-5 echo "Go home" | mail -s "It is 5pm" jsprague


This example would send me an email with a subject of "It is 5pm" and message of "Go home" every Monday - Friday at 5:00pm reminding me to stop working and go home.

Creating a New Crontab
To create a new crontab, you would type crontab -e at a shell or telnet prompt. This will launch whatever the default text editor is (in our case this is PICO, although it could be vi or another text editor). Once in your text editor, simply type in one task per line and make sure that you leave a blank line at the end. When saving the file, the filename and location suggested by the text editor should be correct. To test your work, go to a shell or telnet prompt and type crontab -l. If you did everything correctly, you should see a list of the tasks that you entered.
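Since the default crontab operation is replace (per the usage message shown earlier), you can also compose the entries in an ordinary file and install the whole file at once instead of using the editor. A sketch, with the actual crontab calls left as comments so nothing is installed by accident:

```shell
#!/bin/sh
# Compose the crontab in a plain file first (one task per line,
# blank line at the end):
cat > /tmp/mycron <<'EOF'
00 1 * * * perl /full/path/to/build.cgi
00 17 * * 1-5 echo "Go home" | mail -s "It is 5pm" jsprague

EOF

# Then install it (the default operation is replace) and verify:
#   crontab /tmp/mycron
#   crontab -l
```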


Wednesday, 11 February 2009

System Hardening Process Checklist

Most administrators and security officers are well aware of the necessity of system hardening for corporate systems.

Hardening is the process of securing a system by reducing its surface of vulnerability. By the nature of its operation, the more functions a system performs, the larger the vulnerability surface.

Since most systems are dedicated to one or two functions, reducing the possible vectors of attack is done by removing any software, user accounts or services that are not related to and required by the planned system functions. System hardening is a vendor-specific process, since different system vendors install different elements in the default install process.

However, all system hardening efforts follow a generic process. So here is a checklist and diagram by which you can perform your hardening activities.

1. Perform initial System Install - stick the DVD in and go through the motions.
2. Remove unnecessary software - all systems come with a predefined set of software packages that are assumed to be useful to most users. Depending on your target use of the system, you should remove all software that is not to be used like graphics and office packages on a web server.
3. Disable or remove unnecessary usernames and passwords - most systems come with a lot of predefined user accounts for all kinds of purposes - from remote support to dedicated user accounts for specific services. Remove all remote and support accounts, and all accounts related to services which are not to be used. For all used accounts, ALWAYS change the default passwords.
4. Disable or remove unnecessary services - just as the two previous points, remove all services which are not to be used in production. You can always just disable them, but if you have the choice remove them altogether. This will prevent the possible errors of someone activating the disabled service further down the line.
5. Apply patches - after clearing the 'mess' of the default install, apply security and functionality patches for everything that is left in the system - especially the target services.
6. Run Nessus Scan - update your Nessus scanner and let her rip. Perform a full scan including dangerous scans. Do the scan without any firewalls on the path of the scan. Read through the results, there will always be some discoveries, so you need to analyze them.
7. If no vulnerabilities are discovered, use the system - after analyzing the results, if nothing significant was discovered, congratulations! You have a hardened system ready for use.
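A few read-only commands can support steps 2-4 of the checklist above. This is a sketch, not a definitive procedure - the exact commands vary by distribution, and the package-listing command in the final comment is a Red Hat-style example:

```shell
#!/bin/sh
# Step 3: accounts that still have a real login shell
# (anything not ending in nologin or false deserves a look):
grep -vE '(nologin|false)$' /etc/passwd | cut -d: -f1

# Step 4: services currently listening on the network
# (use whichever of netstat/ss the system provides):
( netstat -tln || ss -tln ) 2>/dev/null || true

# Step 2: installed packages to review for removal, e.g. on Red Hat-style systems:
#   rpm -qa | sort
```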


Example - Bypassing WiFi MAC Address Restriction

Among security professionals, it is a well-known fact that using MAC address restriction alone is useless as a protection mechanism for WiFi. But for the general public, this is still a popular method. This post aims to show how easy it is to hijack someone's MAC address and bypass this restriction.

Here is the process, as used on a Windows laptop:

1. Obtain a valid MAC address that is allowed on the network
2. Download macshift, created by one of the Internet's renaissance men, Nate True
3. Copy macshift.exe to c:\Windows\System32\
4. Find the windows name of your wireless connection, from the Network Connections, for example "Wireless Network Connection"
5. Open a Command Prompt (Start -> Run -> cmd.exe)
6. Obtain your adapter's MAC address by typing ipconfig /all at the command prompt. The results will include the MAC addresses of all interfaces.
7. Type macshift VALID_MAC_ADDRESS -i "Wireless Network Connection"

8. Happy surfing

NOTE: Don't forget to change your MAC to its original value when you are done!

The process, excluding step 1, takes a total of five minutes. Now, it can be argued that it is not easy to obtain a valid MAC address, so here are two scenarios:

* If the WiFi network does not allow for unlisted MAC addresses to associate, then you can :
o Put your WiFi card in monitor mode and capture some traffic - from there it is easy to find the MAC addresses
o Write a brute-force program that will cycle the MAC address of your adapter and try to associate with the LAN. You can optimize the brute force by finding a laptop that can connect to the network and recording its actual model; then you only need to cycle through half of the MAC address bytes
* If the WiFi network allows for unlisted MAC addresses to associate and then uses some sort of egress filtering, on the router or service selection gateway, things are much easier - just run a sniffer for 5 minutes and collect all other MAC addresses on the network. Filter out the gateway MAC, and at a later time (usually in the dead of night) try them one by one.
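The brute-force idea in the first scenario can be sketched as follows: fix the vendor prefix (taken from the laptop model identified on the network) and cycle only the remaining byte. The prefix below is a made-up example, and the macshift call is left as a comment:

```shell
#!/bin/sh
# Example vendor prefix - the first five bytes held fixed. A real value would
# come from the laptop model you identified on the network:
prefix="00:1B:77:4A:12"

# Cycle the last byte through all 256 values:
i=0
while [ $i -lt 256 ]; do
    mac=$(printf '%s:%02X' "$prefix" $i)
    echo "$mac"
    # here you would try to associate, e.g.:
    #   macshift $mac -i "Wireless Network Connection"
    i=$((i + 1))
done
```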

This example is presented just as an eye-opener to the readers with less security experience. MAC Address filtering may be used as a deterrent, but only with WPA2 encryption and minimal possible range of the WiFi access point signal.


Security Information Gathering - Brief Example

When embarking on a security evaluation, the first stop for security information gathering is the Internet. Merely connecting to the target's public servers and DNS yields a wealth of information.
So here is an example of what can be learned in a couple of minutes of checking a company domain's public servers, while NOT DOING ANYTHING ILLEGAL.

* Domain Name Servers (DNS) - Name servers are the first target of any information gathering. Once you know a company's domain name, you should check its DNS. Here is what it will give you:
o The DNS Server provider - by checking who owns the IP you'll know whether it's in-house hosted DNS or outsourced. If it's in-house such a DNS server can be a prime target for inbound attacks, and such servers are less secure simply because the internal IT department is torn between administering all kinds of stuff.
o The level of isolation of zone transfers - A zone transfer is a completely legitimate function of a DNS server, used to feed domain information from the primary server to the secondary servers. If it is open to any outsider, he/she can collect a list of all hosts registered in the domain as possible attack targets. Most zone transfer attempts will fail, but even the way they fail gives excellent information:
+ Failed with message REFUSED or NOAUTH - you can communicate with the server on the appropriate port (TCP 53), but zone transfer is not allowed. Even so, you can try to attack the server via a TCP SYN flood on that port
+ Failed with message "connection failed" - you can't connect to the appropriate port; forget about zone transfers and TCP SYN floods
* Mail Exchanger (MX) - Mail exchangers are mail servers specifically dedicated to receiving e-mail for the target company domain. They are usually not the main corporate mail servers, but information from them can be useful for understanding what types of adversaries are on the other side if you choose an e-mail vector of attack. Here is the summary of info from the MX:
o Mail server provider - by checking who owns the IP you'll know whether it's in-house hosted MX or outsourced. If it's in-house such a MX server can be a good target for inbound attacks.
o Mail server banner - the default banner, unless modified, gives information about the server software, so you'll know what you're up against and can search for known vulnerabilities.
* Web server - the same elements that apply to MX apply here, so we won't repeat them again.
* Typical server names - while the generic servers are in scope of the security administrators and usually well secured, a company can have any number of registered servers for testing or internal uses. These servers are in most cases excellent targets for attack, since they are usually 'temporary' and not treated by corporate policies. These server names can include 'www1', 'test', 'dc', 'gc', 'domain', 'mail', 'pop' and the like.
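The zone-transfer check in the DNS bullet above can be scripted. The sketch below classifies the outcome using the two failure modes listed; the dig invocation in the comment uses placeholder names, and real dig output formats vary by version, so treat the pattern match as an approximation:

```shell
#!/bin/sh
# Classify a zone-transfer attempt from the text of its result (read on stdin):
classify() {
    if grep -qiE 'refused|noauth'; then
        echo "port 53 reachable, transfer denied"
    else
        echo "no transfer - check connectivity first"
    fi
}

# A real attempt would look like (placeholder domain and name server):
#   dig AXFR example.com @ns1.example.com | classify

# Demo with a canned failure message:
echo "; Transfer failed. REFUSED" | classify   # -> port 53 reachable, transfer denied
```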
Tools of the trade
There are a lot of tools that can help you with information gathering. I have written a small program that will get you started.

Also, to check who owns an IP address, you should make good use of the whois services of the Internet registries like RIPE, APNIC, AfriNIC, ARIN and LACNIC.


SQL Server Bulk Import - BCP HOW TO

A lot of people using the free MS SQL Server 2005 Express hit a brick wall when they try to import data into the created database. Here is a tutorial, with video demo included on how to use the command-line BCP tool to import data into MS SQL Server 2005 Express.

During an analysis I conducted over the past few days, I also found out the hard way that MS SQL Server 2005 Express does not include GUI-based Data Transformation Services. The only thing it does have is the BCP command-line tool.

So, here is a step-by-step tutorial on how to use the BCP tool and not give up on an otherwise good (and free) product:

1. The data - I am importing data collected by tcpdump. I stored the data in a CSV file (data.csv), a text file with a comma delimiter.
2. Here is a sample row: 16,10.176.1.105,NULL,10.176.1.254,NULL,NULL,64,17.12.2007,19:20:52,520,PING Req,NULL
3. Creating the database - Log in with the command-line SQL tool (sqlcmd) and use the following set of commands to create the database and a table to store the imported data:

Code :

sqlcmd -S ATLAS\SQLEXPRESS
create database data_analysis
go
use data_analysis
go
create table data_import (
    [No_packet] [int] NULL,
    [Src_Logical] [varchar] (255),
    [Src_Port] [varchar] (255),
    [Dest_Logical] [varchar] (255),
    [Dest_Port] [varchar] (255),
    [Flags] [varchar] (255),
    [Packet_Size] [int] NULL,
    [Packet_Date] [varchar] (255),
    [Absolute_Time] [varchar] (255),
    [Additional] [varchar] (255),
    [Protocol] [varchar] (255),
    [newdata] [varchar] (255)
)
go

Content verification - To verify the contents of the created table, use the following set of commands:

Code :

use data_analysis
select count(*) from data_import
go
quit

Data import - To import the data, use the following command:

Code :

bcp data_analysis.dbo.data_import in data.csv -T -C1250 -c -t, -S ATLAS\SQLEXPRESS

Detailed explanation

* bcp - the executable file name
* data_analysis.dbo.data_import - name of database, owner and name of table to receive the data
* in - the same command is used for export and import. in means importing, out means exporting
* data.csv - file name that contains data to be imported, or to receive exported data when using the out direction
* -T - switch indicating a trusted connection. With this switch, the bcp command uses the Kerberos ticket of the logged-on Windows user to authenticate. If you don't use -T, you'll have to use -U and -P (user name/password)
* -C1250 - collation. I found out that BCP does not work well with Unicode files, so I am forcing the 1250 (Central European) code page - it works with most characters
* -c - treat everything as characters. This way it is very easy to import any information.
* -t, - delimiter. The default delimiter for BCP is tab, so I need to tell it my delimiter character (comma)
* -S ATLAS\SQLEXPRESS - server. This switch is followed by hostname\instance name (for MS SQL Server Express, the instance name is SQLEXPRESS)



Security Concerns Cloud “Cloud Computing”

Dark security clouds are gathering above what has been termed “cloud computing” – the resourceful Software as a Service (SaaS) model that provides applications, memory space and other services to companies that need them. The introduction of cloud computing saw rave reviews for this storage and administration model, with pundits calling it the next paradigm shift in the world of computing. The silver lining that shone so bright a year or so ago has now dimmed to invisible levels, and people are wising up to the security issues that go hand in hand with cloud computing.

Protecting your information on your own systems is a task that’s hard to manage even with the best of resources, and so, it makes sense to turn to the big guns like IBM, Google, Amazon, Dell and others when you’re a fledgling in the security department. You believe that they have the wherewithal to provide adequate protection for your data and applications. But the attack on Amazon’s cloud servers sometime during the middle of 2008 has turned the tide, and security has become a huge concern again.

Research firm Gartner lists these factors among the top security concerns in cloud computing –

* user access to data and information
* compliance with regulations
* location of the data
* the encryption used at every level
* recovery measures in the event of a security breach
* investigative support
* long-term viability of the agreement between the provider and the user.


The biggest security concern with cloud computing is the issue of trust:

1. How do you know for certain that the key people who manage your data and applications on the cloud are completely trustworthy?
2. Who else besides you has access to sensitive information?


If an MP3 player bought at a thrift store for a mere $9 is found to hold secret military information, it means that the Pentagon’s security system itself is a serious cause for concern. If a question mark hangs over something as important as national security, how safe is the information belonging to the rest of us?

Another issue that looms large when we consider cloud computing is the fact that we’re putting all our eggs in one basket. Hackers know that if they’re able to render one cloud vulnerable, they can have a field day – they can bring down a host of sites and steal a ton of information. It’s like bringing down an entire world with just one humongous weapon.

Besides this, there’s also the fact that data storage is not standardized – each provider of cloud computing services has its own formats and standards, and this makes it more difficult to switch to a different host when you feel that your current provider has been compromised.

These are the early days of cloud computing, and I’m sure many more security concerns will emerge out of the woodwork as the days go by and hackers try new tricks to gain entry into these networks. The only silver lining in these dark clouds is that there is always some brainstorming going on as to how to keep one step ahead of the bad guys and protect sensitive data and applications.


Tuesday, 10 February 2009

Computer Cloning with Partition Image

When you have over a hundred computers to install, you really start to scratch your head and think that it would be nice not to do the same installation a hundred times. When I faced this problem, I looked at computer cloning as a solution. I did not want to spend big bucks on commercial software like Norton Ghost. I know that some people might be skeptical about using Open Source software, but I gave partimage a try and found it to work very nicely.
Because you have to boot a different OS in order to clone the one on the hard drive, I downloaded System Rescue CD. The only problem with it is that it does not boot with SATA hard drives; there is a known bug listed in the project's bug list. If you need to clone SATA hard drives, you can use a Linux installation with partimage on a separate HDD.

Experienced Linux users might point out that there is also the venerable dd command, which makes a bit-by-bit copy of the given partition. The drawback of dd is that the images it creates are much larger than those created with partimage, because partimage saves only the used portions of the drive.

Case study

I would like to use the following example to show how to clone a Linux installation to a different computer. First, I have to note that the new computer where you will put a copy of the drive image needs to have a motherboard with the same architecture as the original one. Otherwise, Linux will not boot.

Now, let's start. I have a computer with Fedora Core 2 Linux installed on an IDE drive with the following partitions:


/dev/hda1 /boot
/dev/hda2 /
/dev/hda3 swap
/dev/hda4 /home

I would like to create images of these partitions and use them to make an exact duplicate on a drive of the same size in another computer.
Part 1: Make an HDD Image of the Installation

I connect another HDD as a secondary master, where I will put the hard drive images of the first disk, and boot using System Rescue CD. During booting, it asks which keyboard layout to use, then presents the # prompt. First, we need to mount a partition on the secondary master drive:


# mount /dev/hdc4 /mnt/temp1

Under temp1, we can make a directory to store our images.


# mkdir /mnt/temp1/fedora_core2_template
# cd /mnt/temp1/fedora_core2_template

Now, it's time to save the Master Boot Record and Partition Table information of the /dev/hda drive.


# dd if=/dev/hda of=fedora_core2_template.hda.mbr count=1 bs=512

I use the .mbr extension just to show that this is a Master Boot Record.


# sfdisk -d /dev/hda > fedora_core2_template.hda.pt

.pt is for Partition Table.

Now, we're ready to run partimage to save the contents of the /dev/hda1, /dev/hda2, and /dev/hda4 partitions. We do not need to image the swap partition, as it can be created after applying the partition table information to a new drive.

To save a partition, I use the following command:


# partimage -b -z1 -o -V700 save /dev/hda1 fedora_core2.hda1.partimg.gz

This will create a compressed image of the first partition and, if it is larger than 700 MB, split it into multiple 700 MB files ending in 000, 001, ..., ###. 700 MB is just enough to fit one file on a CD, if you ever want to back up your installation. After executing the above command, I type a description of the image and hit F5 to continue.

I will not explain each flag I use in a command, since you can view them by running partimage --help. The reason I am not using the graphical interface is that you may get confused by the choice of which partition to back up, as partimage does not use /dev/hda# in describing partitions.

Repeat the above command for the /dev/hda2 and /dev/hda4 partitions, and a copy of the first hard drive is done.


# partimage -b -z1 -o -V700 save /dev/hda2 fedora_core2.hda2.partimg.gz
# partimage -b -z1 -o -V700 save /dev/hda4 fedora_core2.hda4.partimg.gz

Part 2: Restore the Image to a New Drive on a Different Computer

The new computer can have an HDD of the same size or larger; images made in the first part of the tutorial cannot be applied to an HDD smaller than the one they were copied from. Connect the HDD holding the images as a secondary master and boot with System Rescue CD. When you get to the # prompt, mount the partition on the second drive.


# mount /dev/hdc4 /mnt/temp1
# cd /mnt/temp1/fedora_core2_template

Now, we can restore the master boot record on the new drive.


# dd if=fedora_core2_template.hda.mbr of=/dev/hda

Before we can run partimage, we also need to apply partition table information to the new drive.


# sfdisk /dev/hda < fedora_core2_template.hda.pt
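From here, the restore mirrors Part 1 in reverse. The sketch below only prints the commands as a dry run, since they are destructive; it assumes the image file names saved earlier, and that partimage appended its usual .000 suffix to the first volume of each image.

```shell
#!/bin/sh
# Dry run: build and print the restore commands that would be issued from the
# System Rescue CD prompt (pipe the output to sh to actually run them - destructive!).
cmds=$(
    for part in hda1 hda2 hda4; do
        echo "partimage -b restore /dev/$part fedora_core2.$part.partimg.gz.000"
    done
    # recreate the swap partition, which was deliberately not imaged:
    echo "mkswap /dev/hda3"
)
echo "$cmds"
```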


A fresh take on the browser

At Google, we have a saying: “launch early and iterate.” While this approach is usually limited to our engineers, it apparently applies to our mailroom as well! As you may have read in the blogosphere, we hit "send" a bit early on a comic book introducing our new open source browser, Google Chrome. As we believe in access to information for everyone, we've now made the comic publicly available -- you can find it here. We will be launching the beta version of Google Chrome tomorrow in more than 100 countries.

So why are we launching Google Chrome? Because we believe we can add value for users and, at the same time, help drive innovation on the web.

All of us at Google spend much of our time working inside a browser. We search, chat, email and collaborate in a browser. And in our spare time, we shop, bank, read news and keep in touch with friends -- all using a browser. Because we spend so much time online, we began seriously thinking about what kind of browser could exist if we started from scratch and built on the best elements out there. We realized that the web had evolved from mainly simple text pages to rich, interactive applications and that we needed to completely rethink the browser. What we really needed was not just a browser, but also a modern platform for web pages and applications, and that's what we set out to build.

On the surface, we designed a browser window that is streamlined and simple. To most people, it isn't the browser that matters. It's only a tool to run the important stuff -- the pages, sites and applications that make up the web. Like the classic Google homepage, Google Chrome is clean and fast. It gets out of your way and gets you where you want to go.

Under the hood, we were able to build the foundation of a browser that runs today's complex web applications much better. By keeping each tab in an isolated "sandbox", we were able to prevent one tab from crashing another and provide improved protection from rogue sites. We improved speed and responsiveness across the board. We also built a more powerful JavaScript engine, V8, to power the next generation of web applications that aren't even possible in today's browsers.

This is just the beginning -- Google Chrome is far from done. We're releasing this beta for Windows to start the broader discussion and hear from you as quickly as possible. We're hard at work building versions for Mac and Linux too, and will continue to make it even faster and more robust.

We owe a great debt to many open source projects, and we're committed to continuing on their path. We've used components from Apple's WebKit and Mozilla's Firefox, among others -- and in that spirit, we are making all of our code open source as well. We hope to collaborate with the entire community to help drive the web forward.

The web gets better with more options and innovation. Google Chrome is another option, and we hope it contributes to making the web even better.

So check in again tomorrow to try Google Chrome for yourself. We'll post an update here as soon as it's ready.

Update @ 3:30 PM: We've added a link to our comic book explaining Google Chrome.

Posted by Sundar Pichai, VP Product Management, and Linus Upson, Engineering Director


Welcome
