Archive for the ‘Server Admin’ Category

A Step by Step Guide to Setup Rails application on Ec2 instance (Ubuntu Server)

Sometimes the Bitnami or other Rails AMIs don't fit your needs directly and you feel the need to build the server yourself. Here I go step by step through building such a stack on top of an Amazon EC2 Ubuntu Server.
  • Rails applications are a little different to install on servers, but the process is quite easy. A Rails application needs a web server and an application server to run. For development, it ships with the default WEBrick server, which serves as the application server on the local machine. For setting it up on a production server, we have the following choices of web and application servers :-

    • Web servers

      1. Apache

      2. Nginx

    • Application Servers

      1. Passenger

      2. Thin

      3. Puma

      4. Unicorn

  • The simplest and best combination is Nginx + Passenger. It allows greater flexibility in configuration and also offers better speed than the other combinations. So we are going to set up a Rails application using the Nginx + Passenger configuration on a bare Ubuntu server. Here are the steps :-

  1. Launch an EC2 instance with an Ubuntu AMI. Make sure you have HTTP and SSH access to the server.

  2. SSH into the server using the private key (.pem) used while launching the instance and install the available updates by running :-

    sudo apt-get update && sudo apt-get upgrade
  3. Now you need to set up Ruby on your server, so install the single-user RVM Ruby by following this blog.
  4. Load RVM and make the installed Ruby the default by running the following commands :-

    source ~/.rvm/scripts/rvm
    rvm use 2.1.0 --default
  5. Install a version control system to clone your Rails application onto the server. We generally use Git with Rails applications, which can be installed by running the following command :-

    sudo apt-get install git
  6. Now clone your application on the server :-

    git clone yourepo.git

    Note :- In case of a private Git repository, you need to add the server's public key to the deploy keys of your repository, otherwise you will be prompted with a permission denied error.

    OR

    Deploy your application to this server using a Capistrano script. Please read this blog for more details on deploying your application using Capistrano.

  7. Now go to your application directory and install the gems by running the bundle install command. If you want to set up your database on the same server, you can do so using the following commands :-

    • In case of MySQL

      sudo apt-get install mysql-server mysql-client
      sudo apt-get install libmysqlclient-dev
    • In case of PostgreSQL, follow this blog for installation and then install the development headers :-

      sudo apt-get install libpq-dev

      After setting this up, run your database migrations for whichever environment you want to launch the server in.

  8. Now install the Passenger gem by running :-

    gem install passenger

  9. The next step is to install the Nginx server, but we have some prerequisites for this.

      1. It needs the curl development headers, which can be installed by :-

        sudo apt-get install libcurl4-openssl-dev

      2. Nginx will be installed under the /opt directory and your user should have permissions on that folder, so make your user the owner of the /opt directory by running :-

        sudo chown -R ubuntu /opt

  10. Now install the Nginx server with the Passenger extension by running the following command :-

    passenger-install-nginx-module

  11. Set up your Nginx server as a service with an init script by using the following commands :-

    wget -O init-deb.sh http://library.linode.com/assets/660-init-deb.sh
    sudo mv init-deb.sh /etc/init.d/nginx
    sudo chmod  +x /etc/init.d/nginx
    sudo /usr/sbin/update-rc.d -f nginx defaults
  12. Set up your application path in the Nginx configuration file, i.e. /opt/nginx/conf/nginx.conf :-

    server {
      listen 80;
      server_name localhost;
      root /home/ubuntu/my_application/public;   # <-- be sure to point to 'public'
      passenger_enabled on;
      rails_env production;
    }

  13. Lastly, start your server by running the following command :-

    sudo service nginx start
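
    Once Nginx is up, a quick sanity check from the server itself (a minimal sketch, assuming the configuration above) :-

    ruby -v                      # the RVM Ruby you set as default
    ps aux | grep [n]ginx        # the Nginx master and worker processes should be running
    curl -I http://localhost/    # expect a response from your Rails app via Passenger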

     


S3cmd : A commandline tool to manage data on Amazon S3

S3cmd is a tool for managing data on Amazon S3 storage. It is a simple and easy to use command line tool through which users can easily access S3 data. It is also ideal for scripts, automated backups triggered from cron, sync and so on. s3cmd is an open source project and is free for both commercial and private use; you only have to pay Amazon for using the S3 storage, nothing for using the s3cmd tool.
Installation :- The installation of s3cmd is very simple and consists of a single command :-
sudo apt-get update && sudo apt-get install s3cmd
Configure s3cmd :- To access S3 storage using the s3cmd tool, you need to configure it with your S3 Access Key ID and Secret Access Key. To do this, type the command :-
s3cmd --configure
And just supply your S3 creds when prompted to do so. You do not need to configure the other options; just leave them blank if you don't know about them, and save your changes when prompted. This will create a .s3cfg file in your home directory containing the information needed to access the AWS storage. The disadvantage of using s3cmd is that at one time you can use only one account; to change the account you have to reconfigure s3cmd by using the "s3cmd --configure" command. Access :- To test whether s3cmd is working fine or not, just type the following command :-
s3cmd ls
It will show all the buckets you have on S3. To access a particular bucket, just use the path of that bucket. For example, if you have a bucket with path 's3://fh-assets', just use :-
s3cmd ls s3://fh-assets
This will list the contents of this bucket only. You can easily read from or write to S3 storage using the s3cmd get/put options. GET is for reading and PUT is for writing. For example, say we have a file abc.pdf on S3 and we want to retrieve it to the local system. Just use the following command :-
s3cmd get s3://fh-assets/abc.pdf 123.pdf
This will copy abc.pdf into the current directory with the name 123.pdf. And to put a file onto S3, use the following command :-
s3cmd put 123.pdf s3://fh-assets/abc.pdf
Similarly, if you want to copy a whole folder, just use -r. Sync :- This command is used to sync a local directory with a bucket, for example :-
s3cmd sync local/directory bucket_name
Here local/directory is the source and bucket_name is the destination in which the modifications occur. We can use the sync command to both upload to and download from S3; we just need to swap the source and the destination. There are various options available with sync, but before moving forward we need to see what types of transfers s3cmd performs, as this will help us understand s3cmd sync better. In s3cmd there are basically two modes of transfer :-
  • Unconditional transfer :- In this, all matching files are uploaded to S3 or downloaded back from S3. This is similar to the standard unix cp command.
  • Conditional transfer :- In this, only those files that don't exist at the destination in the same version are transferred by the s3cmd sync command. By default an MD5 checksum and the file size are compared. This is similar to the unix rsync command. The options available with s3cmd sync are :-
  • --dry-run -> performs a trial run, showing what would be transferred without actually uploading anything.
  • --skip-existing -> don't compare checksums and sizes of remote vs local files; only upload files that do not already exist at the destination.
  • --delete-removed -> delete files that exist at the destination but are no longer present locally.
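Putting these options together, a minimal sketch (the bucket path is illustrative, and it assumes s3cmd is already configured for the user running it) :-
s3cmd sync --dry-run --delete-removed local/directory/ s3://fh-assets/backup/
s3cmd sync --delete-removed local/directory/ s3://fh-assets/backup/
And since s3cmd is script-friendly, a nightly backup could be run from cron :-
0 2 * * * s3cmd sync --delete-removed /var/backups/ s3://fh-assets/backup/ >/dev/null 2>&1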
Enjoy the handy new tool in your kitty !!

S3FS : Mounting Amazon S3 as a filesystem

S3FS (Simple Storage Services File System) is a FUSE based file system backed by Amazon S3 storage buckets; once mounted, S3 can be used just like a drive on our local system. This helps a lot if you have an app which interacts very frequently with Amazon S3. Follow these steps to mount a bucket from S3 to the local system using s3fs on Ubuntu.
1. First of all you need to install the libraries that are needed by s3fs to run on the system. To install those libraries, run the following command in the console :-
  sudo aptitude install build-essential libcurl4-openssl-dev libxml2-dev libfuse-dev comerr-dev libfuse2 libidn11-dev libkadm55 libkrb5-dev libldap2-dev libselinux1-dev libsepol1-dev pkg-config fuse-utils sshfs
2. After that, download the appropriate version of s3fs you want to use. In this example we are using s3fs-1.68, which can be downloaded from this link.
3. After downloading s3fs, go to that path in your terminal and run the following commands :-
  • tar xvzf s3fs-1.68.tar.gz
  • cd s3fs-1.68/
  • ./configure --prefix=/usr
  • make
  • sudo make install
4. After this, you need to supply your S3 credentials to s3fs to mount the bucket. There are different ways of doing that; you can choose whichever suits you :-
  • By setting the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables, using the export command on Ubuntu :-
    export AWSACCESSKEYID=*******************
    export AWSSECRETACCESSKEY=xxxxxxxxxxxx
  • By using a .passwd-s3fs file in your home directory. Just create the .passwd-s3fs file in your home directory and add the creds for S3 into it. The s3fs password file has this format (use this format if you have only one set of credentials) :-
    accessKeyId:secretAccessKey
    If you have more than one set of credentials, then you can have default credentials as specified above, but this syntax will be recognized as well :-
    bucketName:accessKeyId:secretAccessKey
  • By using the system-wide /etc/passwd-s3fs file. Just create the passwd-s3fs file in the /etc directory and add the creds for S3 into it, in the same format as the last point.
5. After that, change the permissions of the password file :-
  • If you are using ~/.passwd-s3fs then permission should be 600
  • If you are using /etc/passwd-s3fs then permission should be 640
6. And last, to mount a bucket (mybucket) to a specific directory (/path/to/directory/) :-
  s3fs mybucket /path/to/directory/
7. To unmount the mounted drive, use this command :-
  fusermount -u /path/to/directory/
8. To automount the S3 bucket to the local system on every boot, edit the /etc/fstab file and add this at the end of the file :-
  s3fs#bucket_name /path/to/directory/ fuse passwd_file=/path/to/passwd/file 0 0
And after this, edit /etc/rc.local and add the following line before exit 0 :-
  mount -a
9. If you get any error/warning while mounting during system boot, just add the _netdev option to the fstab entry :-
  s3fs#bucket_name /path/to/directory/ fuse _netdev,passwd_file=/path/to/passwd/file 0 0
This will omit the warnings and mount the bucket to your local system. Now you can access Amazon S3 just like another folder on your file system.
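As a quick recap of steps 5-7, a minimal sketch (the bucket name and mount point are illustrative) :-
  chmod 600 ~/.passwd-s3fs          # per-user credentials file
  sudo chmod 640 /etc/passwd-s3fs   # system-wide credentials file, if you used one
  sudo mkdir -p /mnt/mybucket
  s3fs mybucket /mnt/mybucket
  df -h /mnt/mybucket               # the bucket should show up as a mounted filesystem
  fusermount -u /mnt/mybucket       # unmount when done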

RSYNC

Rsync stands for remote sync. Rsync is used for synchronization of files and directories between two locations, so it can also serve as a backup process. Important features of rsync :-
    • Speed: File transfer with rsync is fast and efficient because it checks local files against remote files in small chunks, or blocks, and transfers only the blocks that differ between the files.
    • Security: Rsync allows encryption of data using the ssh protocol during transfer.
    • Less bandwidth: Rsync compresses and decompresses data block by block at the sending and receiving ends respectively, so the bandwidth used by rsync will always be less than that of other file transfer protocols.
    • Privileges: No special privileges are required to install and execute rsync.
Syntax :-
$ rsync [options] [source] [destination]
Source and destination can be either local or remote. In case of remote, we also need to specify the login name, remote server name and location. Install rsync :-
  • On Ubuntu :- $ sudo apt-get install rsync
  • On Red Hat Enterprise Linux (RHEL) / CentOS 4.x or older :- # up2date rsync
  • On RHEL / CentOS 5.x or newer (or Fedora Linux) :- # yum install rsync
Let's go through various common operations that we can perform using rsync. Synchronize between two directories on the same system :- Let's say there are two folders on the personal computer, rsync1 and rsync2, and we have to synchronize the data between them (assuming these folders are located in the present working directory). The command to sync the data between the two will look like :-
$ rsync -vv -e ssh rsync1/* rsync2
rsync1/* syncs all the files and sub-directories in the rsync1 directory. Options :-
  -v = verbose (-vv for more details)
  -z = compress file data
  -e ssh = enable the secure remote connection
  -a = recursive mode; preserves timestamps, owner and group
  -u = do not overwrite a file at the destination if it has been modified there
  -d = synchronize only the directory tree from source to the destination
  --progress = display detailed progress of the rsync execution
  -i = display the item changes
  --max-size = do not copy files larger than the given maximum size
$ rsync -azvu --delete rsync1/. rsync2/.
This deletes the extra files from the destination. Some more useful options :-
  --times = set the timestamps of the destination file to match those of the source file, instead of using the time of transfer
  --exclude = don't transfer files whose names match the glob-pattern (e.g. "*-*-*-*-*.*")
  --include = transfer only those files whose names match the glob-pattern
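For instance, combining a few of these options (a sketch, using the same example folders as above) :-
$ rsync -azvu --delete --exclude "*.log" rsync1/ rsync2/
This mirrors rsync1 into rsync2, skipping log files and removing files from rsync2 that no longer exist in rsync1.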
Synchronize Files From Local to Remote :-
$ rsync -avz -e ssh sync1/* username@machinename:path
Here sync1 is the local system directory (the source) and username@machinename:path is the target destination. We can also use --rsh to supply an ssh key :-
$ remote_shell="ssh -i pathToKey"
$ rsync -avz --rsh="$remote_shell" sync1/* username@machinename:path
Synchronize Files From Remote to Local
$ rsync -avz -e ssh username@machinename:path sync1/
Here sync1 is the local system directory (the target destination) and username@machinename:path is the source. Again, we can use --rsh to supply the ssh key :-
remote_shell="ssh -i pathToKey"
$ rsync -avz --rsh="$remote_shell" username@machinename:path sync1/
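And a sketch pulling only smaller files from the remote host with progress output (key path and host names as in the examples above) :-
$ rsync -azv --progress --max-size=10m -e "ssh -i pathToKey" username@machinename:path/ sync1/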

Using Localtunnel in Rails Application

Localtunnel, as the name indicates, allows you to share your local web server on the internet. This kind of technique is very useful in cases such as:-
  • You want to demonstrate your web applications to clients.
  • Testing the web applications for compatibility issues against various operating systems and web browsers.
  • Testing your web applications for compatibility in various mobile devices.
It is very easy to use: your web application will be shared on the public internet in a couple of seconds while it is running on your local system. Let's go through a step by step procedure to tunnel our very own RoR apps. STEP 1: First, you need to install the client library for Localtunnel. Just install the 'localtunnel' gem, or add it to your Gemfile :-
gem 'localtunnel'
and run 'bundle install'. STEP 2: Run your local web server on any port! Let's say you're running on port 8080 :-
$ rails s -p 8080
STEP 3: Now run localtunnel, specifying the port to be shared :-
$ localtunnel 8080
It will establish a connection between your local server running on port 8080 and localtunnel.com. Note that the first time you run localtunnel, it needs you to specify your ssh public key for authentication. Here's an example :-
$ localtunnel -k ~/.ssh/id_rsa.pub 8080
After running the above commands, you will see something like this :- Port 8080 is now publicly accessible from http://xyz.localtunnel.com … Enjoy tunnelling your apps !!

Attaching an EBS volume to Amazon EC2 instance

Amazon's EBS-backed EC2 instances come with limited storage space by default (8GB to be specific, mounted on /). However, some applications may require much more instance space. This can be accomplished by attaching an EBS volume to your EC2 instance. Attaching extra storage to an EC2 instance is pretty easy, and you only have to follow a few very simple steps:
  • Allocate a new EBS volume in your Amazon AWS EC2 dashboard. The option is under the heading Volumes within the major head Elastic Block Store. You will be asked for the size you want to allocate. The availability zone must be the same as the zone of the EC2 instance to which you want to attach this volume.
  • In the list of volumes, attach the newly created EBS volume to the desired instance (as you select the volume using the checkbox, an option will appear at the top to attach it to an instance).
  • Now connect to your instance using ssh and run sudo fdisk -l. fdisk will show the list of volumes, and your newly created volume should appear unpartitioned in this list. Format the file system /dev/xvdf (Ubuntu's name for your newly attached drive) using the following command :-

    sudo mkfs.ext4 /dev/xvdf
  • Mount the volume onto the system and add it to /etc/fstab so that it mounts at system start (here we assume that you will mount it to /vol; this can be any directory that you like) :-

    sudo mkdir -m 000 /vol
    echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
    sudo mount /vol
  • That's it, enjoy your extended partition !!
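  Before you move on, a quick check that the volume really is mounted and writable (assuming the /vol mount point from above) :-

    df -h /vol
    sudo touch /vol/testfile && ls -l /vol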

Installing redmine on a shared host(Bluehost/Dreamhost/Apthost/Hostgator etc)

Redmine is a popular open source tool built on top of RoR. Recently, we@enbake got a task to install Redmine on a shared host. We faced several problems during the process, which we thought were worth documenting. I will take you through a step by step guide to install Redmine on a shared host. The host happened to be Apthost in our case, but I guess the same would apply for Bluehost, Dreamhost, Hostgator and other shared hosts.
  1. Ensure that you have SSH access to your server. If you don't have it currently, please apply for SSH access with your host.
  2. Once you have SSH access, log into the server via ssh.
  3. Please check the ruby and the rails version on your host and download the appropriate redmine source code from http://www.redmine.org/projects/redmine/wiki/Download
  4. Extract redmine to the target folder. For the ease of this guide, we assume that redmine was extracted to ~/redmine.
  5. Create a database using MySQL on your server (most probably using cpanel) and add the appropriate database settings to the Rails config/database.yml.
  6. Install the gems required by redmine using rake gems:install. The command will install gems onto your user home directory and you might need to add the gem path to your .bash_profile.
  7. If your .bash_profile does not contain the following directives, please add them to it:
  8. export GEM_PATH=/usr/lib/ruby/gems/1.8:~/ruby/gems
     export GEM_HOME=~/ruby/gems
     (here ~/ruby/gems is the path where local gems are usually installed)
  9. Redmine stores session data in cookies by default, which requires a secret to be generated. So we need to generate the session store secret. Under the application's main directory, run: rake generate_session_store
  10. Create the database structure by running the redmine migrations: RAILS_ENV=production rake db:migrate
  11. Migrations will create all the tables and an administrator account with admin/admin as username/password.
  12. The user who runs Redmine must have write permission on the following subdirectories: files, log, tmp & public/plugin_assets :-
      chown -R user:group files log tmp public/plugin_assets
      chmod -R 755 files log tmp public/plugin_assets
  13. Redmine comes with a prebuilt dispatcher and .htaccess files. Rename the .example dispatcher files to their respective extensions. Similarly rename the example .htaccess to .htaccess
  14. Test the installation by running the WEBrick web server. Under the main application directory run: ruby script/server -e production
  15. If there is no error in step 14, then Redmine is all set and you should now move ahead to configure Apache to serve Redmine. Otherwise, please go back and check the errors.
  16. We will install redmine on a subdomain in this guide since most of the shared hosts support subdomaining.
  17. Create a subdomain using cpanel's subdomain management option. Give it some path in the Apache directory, normally under ~/public_html. Let's say the path is ~/public_html/redmine and the subdomain is redmine.tld.com for the purpose of this guide.
  18. Now move to the shell again and make a symbolic link at ~/public_html/redmine pointing to your Redmine install's public directory (~/redmine/public in this case). This command will create the symbolic link : ln -s ~/redmine/public ~/public_html/redmine
  19. Add the local gem path in your config/environment.rb file :- ENV['GEM_PATH']='/home/username/ruby/gems:/usr/lib/ruby/gems/1.8'
  20. Run the application by entering your application’s url in browser and your shiny new redmine install should be there.
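  A quick check from the shell, using the example subdomain above (a sketch) :-
  curl -I http://redmine.tld.com/
  It should return an HTTP 200 response.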
Enjoy Managing your projects !!

Using Amazon SES for email delivery in Rails 3

Recently, Amazon added a new product to their cloud services kitty called Amazon SES (Amazon Simple Email Service). The service can be used for bulk and transactional emails by developers and businesses alike. So, as you might have guessed by now, the service competes head on with Postmark, Sendgrid, Socketlabs etc. The simple reason for scrapping the said services and going ahead with Amazon SES is the price, that's it !! Now, let's dive deep into how to get Amazon SES running with a simple web application (we will be using Rails 3 as the example client application in this writeup).
Setting up Amazon SES for delivering emails
1. Apply for the activation of SES in your Amazon AWS account. If you are an existing AWS user, you will already be getting Amazon marketing promos to activate SES. Or you can sign up for SES here.
2. By default Amazon SES will be activated in a sandbox mode. This means that all the email addresses that you will be sending mails to will need to be verified, otherwise the email won't be delivered. Also, with verified email addresses, you can send up to 200 messages per day, at a maximum rate of 1 message per second.
3. To get your email addresses verified, download the Amazon SES scripts.
4. To run the SES verification script, Perl must be installed on your system along with the dependent modules.
5. Run the following command :-
ses-verify-email-address.pl -k creds.txt -v
Amazon will send an email to the specified address. Please click the link in that email and you are done with the verification process. Please note that creds.txt here contains the AWS credentials. A sample creds.txt file looks like :-
AWSAccessKeyId=your_access_key
AWSSecretKey=your_secret_key
6. After testing your emails in the sandbox, you can click here to request production access to Simple Email Service. They say it may take up to 24 hours, but my access request was granted in about 30 minutes, and now I can send to any address!
This was all about setting up Amazon SES. Now let's move to integrating the setup with a Rails 3 application. Configuring a Rails 3 application to send mails through Amazon SES :- There are already a number of open source gems/plugins available to configure ActionMailer to send email through Amazon SES, but we decided to go ahead with drewblas/aws-ses (the aws-ses gem) for the simple reason that it works painlessly. Now:
1. Add the aws-ses gem to your Gemfile :-
gem "aws-ses"
2. Extend ActionMailer in config/initializers/your_config_file.rb, where your_config_file.rb is the name of the file which contains the initialization routines for the ses gem :-
ActionMailer::Base.add_delivery_method :ses, AWS::SES::Base,
  :access_key_id     => ENV['AMAZON_ACCESS_KEY'],
  :secret_access_key => ENV['AMAZON_SECRET_KEY']
3. Configure ActionMailer to send through SES. In config/environments/*.rb :-
config.action_mailer.delivery_method = :ses
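As a quick smoke test you can send a mail from the application environment (a sketch; the addresses are illustrative and must be SES-verified while you are still in sandbox mode) :-
rails runner 'ActionMailer::Base.mail(
  :from    => "verified-sender@example.com",
  :to      => "verified-recipient@example.com",
  :subject => "SES test",
  :body    => "Hello from Amazon SES"
).deliver'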
That's it !! You are now ready to send mails from your Rails app through Amazon SES. Happy Mailing !!

Setting up Rails 3 stack on an Amazon EC2 instance

Recently, we had an encounter with Rails 3 on EC2 for one of our esteemed clients. The client wanted to host his application on Amazon EC2 (and rightly so, given that it is such an amazing infrastructure at such an affordable cost). After searching the public AMIs, we could not come up with one that satisfied all our requirements (Rails 3, Postgres, Phusion Passenger). This led us to get our sys admins to prepare a bare minimum EC2 AMI with the required development stack. We documented the steps in case they can be helpful to someone.
1. Choose a bare minimum AMI; we chose a CentOS 5.5 based 64 bit AMI.
2. Download the keypair and log into your server with ssh.
3. You need to be root to perform the following steps, so if you log in with ec2-user, type in the following command to become the root user :-
sudo su -
Install Ruby and Rails 3:
  1. The Amazon Machine Image (AMI) comes with Ruby 1.8.7 installed as of today.
  2. Rails 3 needs ruby 1.9+ to work.
  3. Ruby 1.9.1 is a buggy version so we will go ahead and install ruby 1.9.2
  4. We could not find RPMs of Ruby 1.9.2 in the CentOS repos or even on RPMForge, so we had to go ahead and install Ruby from source.
  5. Download ruby sources from http://www.ruby-lang.org/en/downloads
  6. wget ftp://ftp.ruby-lang.org//pub/ruby/1.9/ruby-1.9.2-p136.tar.gz
  7. unpack it !!
    tar xzvf ruby-1.9.2-p136.tar.gz
  8. Install development tools.
    • yum groupinstall 'Development Tools'
    • yum install readline-devel
  9. Move to ruby sources and compile
  10. cd ruby-1.9.2-p136
    ./configure
    make && make install
    
  11. Sit back and relax; Ruby will take some time to compile. It took about 7-9 minutes for us on an EC2 micro instance.
  12. After you are done with the Ruby install, install Rails :-
    gem install rails
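  A quick check that the new toolchain is picked up :-

    ruby -v     # should report ruby 1.9.2p136
    rails -v    # should report Rails 3.x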
Install Apache and Passenger
  • Install Apache 2.
    yum install httpd
  • yum install httpd-devel
  • Install Passenger :-

    gem install passenger
    passenger-install-apache2-module
  • In case you get the following error :-

    To install OpenSSL support for Ruby: Please (re)install Ruby with OpenSSL support by downloading it from http://www.ruby-lang.org/

    go to the ruby source directory and install the openssl extension (see the sketch below).
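  A sketch of building that extension from the source tree unpacked earlier (assuming the OpenSSL development headers are installed) :-

    yum install openssl-devel
    cd ruby-1.9.2-p136/ext/openssl
    ruby extconf.rb
    make && make install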
Configure Passenger
Configure passenger. Depending upon your passenger version you would need to add following lines to your httpd.conf file. (Please note that these directives would come for you after the passenger install, you just need to copy and paste from there). For me the passenger version was 3.0.2:
LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.2/ext/apache2/mod_passenger.so
PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.2
PassengerRuby /usr/local/bin/ruby
Create virtual host configuration:
<VirtualHost *:80>
ServerName www.yourhost.com
DocumentRoot /somewhere/public    # <-- be sure to point to 'public'!
<Directory /somewhere/public>
AllowOverride all              # <-- relax Apache security settings
Options -MultiViews            # <-- MultiViews must be turned off
</Directory>
</VirtualHost>
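Finally, restart Apache so that Passenger and the new virtual host take effect, and make it start on boot (run as root) :-
service httpd restart
chkconfig httpd on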
Install postgres
yum install postgresql postgresql-server
That's it !! We are all done with our awesome Rails 3 stack on Amazon EC2 !!
Please feel free to post any queries in the comments.