jQuery Validate

The jQuery Validation plugin is a popular and simple plugin for client-side validation. It can be customized with plenty of options and easily integrated into your existing forms. The plugin ships with a set of useful validation methods, including URL and email validation, out of the box, and provides an API to write your own methods. All built-in methods come with default error messages in English. In this blog, I will explore a few methods and uses of the jQuery Validate plugin.
Plugin Methods
This library adds three jQuery plugin methods:
  • validate() – Validates the selected form.
  • valid() – Checks whether the selected form or selected elements are valid.
  • rules() – Read, add and remove rules for an element.
Setting up: You need to download the files and place dist/jquery.validate.min.js in your assets, then require it in your JavaScript manifest:

    #= require jquery.validate.min

Applying to the form:
  • You just need to add the class 'jqvalidate' to the form that you want to validate. In Rails, you will generally do it as follows:
    <%= form_for(@account, :html => { :class => "jqvalidate" }) do |f| %>
    <% end %>
  • Next you need to create a function that is to be called before the form is loaded:
    $.validate_form = () ->
      $.validator.addClassRules
        cc_number: { required: true, digits: true, minlength: 15, maxlength: 16 }
        cc_cvv: { required: true, digits: true, minlength: 3, maxlength: 4 }
      $('form.jqvalidate').validate
        ignore: ".jqvalidate_ignore"
        errorClass: "invalid"
  1. addClassRules is used to specify validations on the fields with its in built methods. cc_number and cc_cvv are the classes given to two different fields respectively on which the validations are applied.
  2. The '.jqvalidate_ignore' class is given to those fields that should not be validated by this plugin.
  3. ‘invalid’ is the class given to the error messages.
This blog post just covers getting started with the plugin. We encourage you to dig in and explore this great plugin further. For more details, refer to the documentation.

Capybara Basics for Automated Testing of Ruby on Rails Applications

Automated testing is an integral part of large-scale production Ruby on Rails applications. A solid integration test suite ensures that the interplay between the various components of the application is smooth and seamless. An integration test generally spans multiple controllers and actions, tying them all together to ensure they work as expected. It tests more completely than either unit or functional tests do, exercising the entire stack from the dispatcher to the database. Capybara comes in handy for simulating tests in a real user environment. In this blog we will explore various ways by which we can strengthen our apps with Capybara.

    • Capybara is an integration testing tool for rack based web applications. It simulates how a user would interact with a website.
    • Capybara helps in testing your web application as a real user would do. It is independent of the driver that is running your tests and comes with Rack::Test and Selenium support built in. For other drivers, you need to include their respective gem.
Capybara requires Ruby 1.9.3 or later and can be installed as a gem by listing it in your Gemfile and running bundle install, or it may be installed directly by typing:
gem install capybara
  • Include capybara in your helper file:
    require 'capybara/rails'
Using Capybara with Rspec
Integrating Capybara with RSpec is fairly simple. Let's go through a step-by-step process to understand the integration. The following test simulates a login in a real user environment.
  • Require Capybara's RSpec support
    require 'capybara/rspec'
  • Capybara specs are required to be placed in spec/features
  • Here is a simple test to test the login page working.
    describe "Login" do
      before :each do
        @user = User.create(:email => 'RoR@enbake.com', :password => 'password')
      end

      it "logs the user in" do
        visit login_path
        fill_in "Email", with: @user.email
        fill_in "Password", with: 'password'
        click_button "Sign in"
        page.should have_content "success"
      end
    end
    The test source code is fairly simple to understand and implement, so I will not go into the details here.
Testing Javascript with Capybara
  • By default Capybara uses the :rack_test driver, which is fast but limited, as it does not support JavaScript. So you need to switch to Capybara.javascript_driver (:selenium by default) by using :js => true. For instance:

    describe "example of testing js", :js => true do
      it "tests js with default js driver" do
      end

      it "tests js with one specific driver", :driver => :webkit do
      end
    end
  • To make Capybara available in all test cases deriving from ActionDispatch::IntegrationTest:

    class ActionDispatch::IntegrationTest
      include Capybara::DSL
    end
  • You may sometimes want to switch the default driver and run everything in Selenium; you can do:

    Capybara.default_driver = :selenium
  • In any case, provided you are using RSpec or Cucumber, if you want to use the faster :rack_test as your default_driver and a JavaScript-capable driver only for selective test cases, you can use :js => true for RSpec and @javascript for Cucumber. By default, JavaScript tests are run using the :selenium driver. You can change this setting with Capybara.javascript_driver.
  • You can also change the driver temporarily, particularly in before or after blocks:

    Capybara.current_driver = :webkit
  • Capybara.use_default_driver switches back to the default driver.
Note: switching the driver creates a new session, so you may not be able to switch in the middle of a test.
Techniques and Trivial Testing Tasks
Navigating
  • visit('/projects')
  • visit(post_comments_path(post))
Clicking links and buttons
  • click_link('id-of-link')
  • click_link('Link Text')
  • click_button('Save')
  • click('Link Text') # Click either a link or a button
  • click('Button Value')
Interacting with forms
  • fill_in('First Name', :with => 'John')
  • fill_in('Password', :with => 'Seekrit')
  • fill_in('Description', :with => 'Really Long Text…')
  • choose('A Radio Button')
  • check('A Checkbox')
  • uncheck('A Checkbox')
  • attach_file('Image', '/path/to/image.jpg')
  • select('Option', :from => 'Select Box')
Scoping
  • within("//li[@id='employee']") do
      fill_in 'Name', :with => 'Jimmy'
    end
  • within(:css, "li#employee") do
      fill_in 'Name', :with => 'Jimmy'
    end
  • within_fieldset('Employee') do
      fill_in 'Name', :with => 'Jimmy'
    end
  • within_table('Employee') do
      fill_in 'Name', :with => 'Jimmy'
    end
Querying
  • page.has_xpath?('//table/tr')
  • page.has_css?('table tr.foo')
  • page.has_content?('foo')
  • page.should have_xpath('//table/tr')
  • page.should have_css('table tr.foo')
  • page.should have_content('foo')
  • page.should have_no_content('foo')
Finding
  • find_field('First Name').value
  • find_link('Hello').visible?
  • find_button('Send').click
  • find('//table/tr').click
  • locate("//*[@id='overlay']").find("//h1").click
  • all('a').each { |a| a[:href] }
  • current_path.should == post_comments_path(post)
Scripting
  • result = page.evaluate_script('4 + 4')
  • page.execute_script("$('#id_of_anything').click()")
Note: unlike page.evaluate_script, page.execute_script does not return anything; if you need a return value, use evaluate_script instead.
Debugging
  • save_and_open_page
  • page.save_screenshot('screenshot.png') # driver specific
Asynchronous JavaScript
  • click_link('foo')
  • click_link('bar')
  • page.should have_content('baz')
  • page.should_not have_xpath('//a')
  • page.should have_no_xpath('//a')
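The matchers above do not fail immediately on asynchronous pages: Capybara retries them until the expectation is met or a timeout elapses. That waiting behavior can be sketched in plain Ruby; `wait_until` here is a hypothetical helper written for illustration, not Capybara's own API:

```ruby
# Retry a block until it returns a truthy value or the timeout elapses.
# Hypothetical sketch of Capybara-style waiting, not Capybara's own API.
def wait_until(timeout: 2, interval: 0.05)
  deadline = Time.now + timeout
  loop do
    result = yield
    return result if result
    raise "condition not met within #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

# e.g. keep checking until simulated "AJAX" content appears
page_content = ''
background = Thread.new { sleep 0.2; page_content = 'baz' }
wait_until { page_content.include?('baz') } # => true
background.join
```

This is why `page.should have_no_xpath('//a')` is preferred over `should_not have_xpath`: the positive "no" matcher can wait for the element to disappear instead of failing on the first check.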
XPath and CSS
  • within(:css, 'ul li') { ... }
  • find(:css, 'ul li').text
  • locate(:css, 'input#name').value
  • Capybara.default_selector = :css
  • within('ul li') { ... }
  • find('ul li').text
  • locate('input#name').value
Calling remote servers
  • Normally Capybara expects to be testing an in-process Rack application, but you can also use it to talk to a web server running anywhere on the internet by setting app_host:

    Capybara.current_driver = :selenium
    Capybara.app_host = "https://www.google.com"
I hope the blog gives a good introduction to adding Capybara tests to your Rails apps. It's just the tip of the iceberg and the possibilities are endless. Let's make our apps more robust and flexible. :)

Embedly Integration in Rails

Embedly is a powerful tool to embed multimedia content or images or even blogs in your application. It comes in handy especially to make your content easier to share.
Embedly provides an easy-to-use official Ruby gem to integrate Embedly into your RoR app. The usage of the gem is fairly straightforward. I will go step by step over how to integrate the Embedly API into an RoR app.
  • Add the gem in your gem file
    gem "embedly"
    and run bundle install.
  • Get the API key from “https://app.embed.ly/login”. For the purpose of the blog we assume that the API key is accessible through the config variable AppConfig.embedly[:api_key]
  • Mostly the embeds will live in view-side logic, so let's create a small helper to generate an embed from a YouTube video. The call to the API is pretty straightforward and intuitive, as we can see from the following code.
  • require 'embedly'
    require 'json'

    def display
      embedly_api = Embedly::API.new :key => AppConfig.embedly[:api_key]
      obj = embedly_api.oembed :url => 'http://www.youtube.com/watch?v=sPbJ4Z5D-n4&feature=topvideos'
      obj.first.html
    end
  • The code calls the Embedly API to generate an oEmbed for the YouTube URL and returns the HTML for the embed. This HTML can be used directly to embed the content into your blog or app.
  • That's it!
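Under the hood, an oEmbed endpoint returns a JSON document whose html field carries the embed markup. A minimal stdlib-only sketch of pulling that field out of such a response (the payload below is illustrative, not a real API reply):

```ruby
require 'json'

# Extract the embeddable HTML from an oEmbed-style JSON response.
# Returns nil when the response carries no embed markup.
def embed_html(json_response)
  data = JSON.parse(json_response)
  data['html']
end

# Illustrative payload, shaped like an oEmbed video response.
sample = '{"type":"video","provider_name":"YouTube","html":"<iframe src=\"http://example.com\"></iframe>"}'
embed_html(sample) # => "<iframe src=\"http://example.com\"></iframe>"
```

The embedly gem wraps exactly this kind of response in objects, which is why `obj.first.html` works in the helper above.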

Enjoy supercharged content with your app and feel free to post any comments in the comments section.

Rapportive Alternatives

As LinkedIn pulls the plug on the features that once made Rapportive such a popular extension, we analyzed the various similar extensions in the Chrome store and how closely they fill the void that Rapportive has left:

Rapporto: One of the fastest drop-ins for Rapportive, this plugin almost emulates the original Rapportive functionality and matches the UI seamlessly as well. The Chrome extension might not be exactly the same as Rapportive, but it gets sufficiently close to fill the void. However, if you would like to see some trusted names behind the product, then Rapporto might not be the way for you. Moreover, the future of Rapporto is not yet certain, as its founders are yet to comment on the roadmap for the extension.

Vibe: We had been using Vibe even before Rapportive, and the current changes in Rapportive make it much more viable. Vibe is not just a Gmail extension, as it is available on plenty of platforms, including mobile (Android, iPhone, etc.). You can hover over any email on any website and Vibe will bring up a nice-looking widget presenting information about the user. The UI is eye candy, though it might not be as sleek as Rapportive's. The application is free for personal use, and you will need to contact their sales team if you wish to use the application for business purposes.

Ark Browser Plugin: Ark is a browser plugin which enables you to see contact information not only from Gmail but from Yahoo, Hotmail and AOL as well. The plugin comes quite close to Rapportive in terms of UI and seems to have quite a lot of contacts indexed. The plugin is available on Firefox as well, if you are not a Chrome junkie.

360social.me: 360social.me is an extremely robust browser plugin which presents information quite similarly to how Rapportive did in its original role; the UI, however, is not as sleek as Rapportive's. The extension is free to use, but it also comes with a professional plan for which they charge a nominal subscription fee.

We are sure that one of the above apps will replace your Rapportive workflow. If you are already using some other plugin, we look forward to your comments.

A Step by Step Guide to Setup Rails application on Ec2 instance (Ubuntu Server)

Sometimes the Bitnami or other Rails AMIs don't fit your needs directly and you will feel the need to build the server yourself. Here I go step by step through building such a stack on top of an Amazon EC2 Ubuntu Server.
  • Rails applications are a little different to install on servers, but the process is quite easy. A Rails application needs a web server and an application server to run. For development, Rails comes with the default WEBrick server, which serves as the application server on your local machine. For a production server, we have the following choices of web and application servers:

    • Web servers

      1. Apache

      2. Nginx

    • Application Servers

      1. Passenger

      2. Thin

      3. Puma

      4. Unicorn

  • The simplest and best combination consists of Nginx + Passenger. It allows greater flexibility for configuration and also offers good speed compared to other combinations. So we are going to set up a Rails application using the Nginx + Passenger configuration on a bare Ubuntu server. Here are the steps :-

  1. Launch an EC2 instance with an Ubuntu AMI. Make sure you have HTTP and SSH access to the server.

  2. SSH into the server using the private key (.pem) used while launching the instance and install the available updates by running :-

    sudo apt-get update && sudo apt-get upgrade
  3. Now you need to setup ruby on your server, so install the single user rvm ruby by following this blog.
  4. Load the rvm and make the installed ruby as default by running the following commands :-

    source ~/.rvm/scripts/rvm
    rvm use 2.1.0 --default
  5. Install the version control to clone your rails application to server. We generally use Git with rails application which can be installed by running the following command :-

    sudo apt-get install git
  6. Now clone your application on the server :-

    git clone yourepo.git

    Note:- In case of a private Git repository, you need to add the public key of the server to the deploy keys of your repository, otherwise you will be prompted with a permission denied error.


    Alternatively, you can deploy your application to this server using a Capistrano script. Please read this blog for more details on deploying your application with Capistrano.

  7. Now go to your application directory and install the gems by running the bundle install command. If you want to set up your database on the same server, you can do so using the following commands :-

    • In case of MySQL

      sudo apt-get install mysql-server mysql-client
      sudo apt-get install libmysql++-dev
    • In case of PostgreSQL, follow this blog for installation and then install the development headers

      sudo apt-get install libpq-dev

      After setting this up, migrate your database for whichever environment you want to launch the server in.

  8. Now install the Passenger gem by running :-

    gem install passenger

  9. The next step is to install the Nginx server, but we have some prerequisites for this.

      1. It needs the curl development headers, which can be installed by :-

        sudo apt-get install libcurl4-openssl-dev

      2. Nginx will be installed under the /opt directory and your user should have permissions on that folder, so make your user the owner of the /opt directory by :-

        sudo chown -R ubuntu /opt

  10. Now install the Nginx server with the Passenger extension by running the following command :-

    passenger-install-nginx-module
  11. Set up Nginx as a service with an init script by using the following commands :-

    wget -O init-deb.sh http://library.linode.com/assets/660-init-deb.sh
    sudo mv init-deb.sh /etc/init.d/nginx
    sudo chmod  +x /etc/init.d/nginx
    sudo /usr/sbin/update-rc.d -f nginx defaults
  12. Set up your application path in the Nginx configuration file, i.e. /opt/nginx/conf/nginx.conf.

    server {
        listen 80;
        server_name localhost;
        root /home/ubuntu/my_application/public;   # <-- be sure to point to 'public'
        passenger_enabled on;
        rails_env production;
    }

  13. Lastly start your server by running the following command :-

    sudo service nginx start


Braintree Integration In Rails Application

Braintree is one gateway that is getting a lot of traction and user love these days due to its ease of use and how quickly you can get started. I have indicated here a few things that should get you started with Braintree in Rails quickly. In this blog, we'll securely process a credit card transaction using the official braintree gem and the client-side encryption method, utilizing the Braintree.js library. The blog covers the case where the form to accept the payments is rendered on the client side only.
Note: Before starting you'll need to sign up for a Sandbox account. That will provide us with the required Braintree API keys.
There are two parts to the problem. The first is setting up the form and securing the data to be sent to the server. The second is processing the transaction on the server side.
Payment Form
A simple payment form captures the credit card information along with the amount of the payment to be made. Here goes a very simple HTML form which asks for all the required info:
<h1>Braintree Credit Card Transaction Form</h1>
<div><form id="braintree-payment-form" action="/create_transaction" method="POST"><label>Card Number</label>
<input type="text" autocomplete="off" size="20" data-encrypted-name="number" />

<label>CVV</label>
<input type="text" autocomplete="off" size="4" data-encrypted-name="cvv" />

<label>Expiration (MM/YYYY)</label>
<input type="text" name="month" size="2" /> / <input type="text" name="year" size="4" />

<input id="submit" type="submit" /></form></div>
Next up, we need to encrypt the payment data that is being sent from the client side to our server for processing. Braintree provides a nice JS library to encrypt the data sent from your form. The integration is as simple as including the library and calling a do-it-all method.
<script type="text/javascript" src="https://js.braintreegateway.com/v1/braintree.js"></script><script type="text/javascript">// <![CDATA[
var braintree = Braintree.create("YourClientSideEncryptionKey"); braintree.onSubmitEncryptForm('braintree-payment-form');
// ]]></script>
As you can see, we just did a couple of steps to secure the data being sent to the server.
  • We initialized the Braintree.js with client-side encryption key.
  • We then called the Braintree.js onSubmitEncryptForm method with the id of the form to encrypt the form before sending it to the server.
That's it, your data is now securely sent to your server for processing. Let's move on to the server side of things. First up, install the official braintree gem via your Gemfile.
Setting up a Rails Controller to commit a transaction with Braintree
  • Every Braintree user receives a unique set of API keys. These need to be available in your application.
  • To retrieve the keys, first sign in your Sandbox account.
  • On the sandbox home page, you can get the API keys required to execute transactions with BrainTree.
  • The code will look like this (for the sandbox environment):

    Braintree::Configuration.environment = :sandbox
    Braintree::Configuration.merchant_id = "use_your_merchant_id"
    Braintree::Configuration.public_key = "use_your_public_key"
    Braintree::Configuration.private_key = "use_your_private_key"
  • Next up, let's make the API call to perform the transaction. The braintree gem comes with an exhaustive set of method calls to interact with the gateway. Here we initiate a sale with Braintree.
  • result = Braintree::Transaction.sale(:amount => "1000.00",
    :credit_card => {:number => params[:number], :cvv => params[:cvv], :expiration_month => params[:month], :expiration_year => params[:year]},
    :options => {:submit_for_settlement => params[:settlement]})
  • That's it! The result object will tell us whether initiating the payment succeeded or not. It comes with methods to test the result, like result.success?; on failure you can inspect result.errors for details.
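To make the branching on the result concrete, here is a hedged, self-contained sketch; FakeResult below is a stand-in I made up for illustration, not the gem's actual result class:

```ruby
# Hypothetical stand-in for Braintree's result object, for illustration only.
class FakeResult
  attr_reader :transaction_id, :message

  def initialize(success, transaction_id: nil, message: nil)
    @success        = success
    @transaction_id = transaction_id
    @message        = message
  end

  def success?
    @success
  end
end

# How a controller might turn the result into a user-facing outcome.
def transaction_outcome(result)
  if result.success?
    "Transaction #{result.transaction_id} submitted for settlement"
  else
    "Payment failed: #{result.message}"
  end
end

transaction_outcome(FakeResult.new(true, transaction_id: "abc123"))
# => "Transaction abc123 submitted for settlement"
```

In a real controller you would pass the object returned by Braintree::Transaction.sale into this kind of branch and render or redirect accordingly.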
This is not the end of the whole payment process in Braintree. Braintree will send calls/hooks informing your application about the various payment states, but that is out of scope for this blog and is better taken up in a separate post. I hope the blog helps you get up and running with Braintree quickly. Please feel free to drop in comments if any.

S3cmd : A commandline tool to manage data on Amazon S3

S3cmd is a tool for managing data on Amazon S3 storage. It is a simple and easy-to-use command line tool through which a user can easily access S3 data. It is also ideal for scripts, automated backups triggered from cron, sync, etc. s3cmd is an open source project and is free for both commercial and private use. You will only have to pay Amazon for using the S3 storage; there is nothing to pay for using the s3cmd tool.
Installation: The installation of s3cmd is very simple and consists of a single command:

sudo apt-get update && sudo apt-get install s3cmd

Configure s3cmd: To access S3 storage using the s3cmd tool you need to configure it with your S3 ACCESS_KEY_ID and SECRET_ACCESS_KEY. To do this, type the command
s3cmd --configure
And just supply your S3 creds when prompted to do so. You do not need to configure the other options; just leave them blank if you don't know about them, and save your changes when prompted. This will create a .s3cfg file in your home directory containing the information to access the AWS storage. The disadvantage of using s3cmd is that at one time you can use only one account; to change the account you have to reconfigure s3cmd using the "s3cmd --configure" command.
Access: To test whether s3cmd is working fine or not, just type the following command.
s3cmd ls
It will show all the buckets you have on S3. To access a particular bucket, just use the path of that bucket. For example, if you have a bucket with the path 's3://fh-assets', just use:
s3cmd ls s3://fh-assets
This will list the contents of this bucket only. You can easily read from or write to S3 storage using the s3cmd get/put options. GET is for reading and PUT is for writing. For example, say we have a file abc.pdf on S3 and we want to retrieve it to the local system. Just use the following command.
s3cmd get s3://fh-assets/abc.pdf 123.pdf
This will copy abc.pdf into the current directory with the name 123.pdf. And to put a file onto S3, use the following command.
s3cmd put 123.pdf s3://fh-assets/abc.pdf
Similarly, if you want to copy a folder, just use -r.
SYNC: This command is used to sync a local directory with buckets. For example:
s3cmd sync local/directory bucket_name
Here local/directory is the source and bucket_name is the destination in which the modifications occur. We can use the sync command both to upload to and download from S3; we just need to swap the source and the destination. There are various options available with sync, but before moving forward we need to see what types of transfers s3cmd performs, as this will help us understand s3cmd sync better. In s3cmd there are basically two modes of transfer:
  • Unconditional transfer: all matching files are uploaded to S3 or downloaded back from S3. This is similar to the standard unix cp command.
  • Conditional transfer: only those files that don't exist at the destination in the same version are transferred by the s3cmd sync command. By default an MD5 checksum and the file size are compared. This is similar to the unix rsync command.
Now continuing with the options that are available with s3cmd sync:
  • --dry-run -> first check what would be uploaded or deleted, without actually transferring anything.
  • --skip-existing -> use if you don't want to compare checksums and sizes of the remote vs local files and only want to upload files that are new.
  • --delete-removed -> delete files that exist at the destination but are no longer present at the source.
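The conditional-transfer check described above can be sketched in plain Ruby; needs_transfer? is a hypothetical helper, not part of s3cmd, showing the size-then-checksum comparison that sync performs:

```ruby
require 'digest'

# Decide whether a conditional sync would transfer a local file.
# The file is skipped only when both its size and MD5 checksum match
# the version already present at the destination.
def needs_transfer?(local_path, remote_size, remote_md5)
  return true unless File.exist?(local_path)          # nothing local to compare
  return true if File.size(local_path) != remote_size # cheap size check first
  Digest::MD5.file(local_path).hexdigest != remote_md5 # full checksum check
end
```

The size comparison comes first because it is cheap; the MD5 is only computed when the sizes already agree, which mirrors why conditional sync is so much faster than re-uploading everything.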
Enjoy the handy new tool in your kitty !!

S3FS : Mounting Amazon S3 as a filesystem

S3FS (Simple Storage Services File System) is a FUSE-based file system backed by Amazon S3 storage buckets; once mounted, S3 can be used just like a drive in our local system. This helps a lot if you have an app which interacts very frequently with Amazon S3. Follow these steps to mount a bucket from S3 to the local system using s3fs on Ubuntu.
1. First of all you need to install the libraries that are needed by s3fs to run on the system. To install those libraries run the following command in the console:

    sudo aptitude install build-essential libcurl4-openssl-dev libxml2-dev libfuse-dev comerr-dev libfuse2 libidn11-dev libkadm55 libkrb5-dev libldap2-dev libselinux1-dev libsepol1-dev pkg-config fuse-utils sshfs

2. After that, download the version of s3fs you want to use. In this example we are using s3fs-1.68, which can be downloaded from this link.
3. After downloading s3fs, go to that path in your terminal and run the following commands
  • tar xvzf s3fs-1.68.tar.gz
  • cd s3fs-1.68/
  • ./configure --prefix=/usr
  • make
  • sudo make install
4. After this you need to supply your S3 credentials to s3fs to mount the bucket. There are different ways of doing that; you can choose whichever is suitable for you:
  • By setting the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables with the export command in Ubuntu:

    export AWSACCESSKEYID=*******************
    export AWSSECRETACCESSKEY=xxxxxxxxxxxx
  • By using a .passwd-s3fs file in your home directory. Create the .passwd-s3fs file in your home directory and add the creds for S3 into it. The s3fs password file has this format (use this format if you have only one set of credentials):

    accessKeyId:secretAccessKey

    If you have more than one set of credentials, you can have default credentials as specified above, but this syntax will be recognized as well:

    bucketName:accessKeyId:secretAccessKey
  • By using the system-wide /etc/passwd-s3fs file. In this you just need to create the passwd-s3fs file in /etc directory, and add the creds for s3 into it similar to the last point.
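As a quick illustration of the two line formats above, here is a hypothetical parser (not part of s3fs) that reads password-file contents into a hash:

```ruby
# Parse s3fs password-file text: "accessKeyId:secretAccessKey" lines give
# default credentials, "bucketName:accessKeyId:secretAccessKey" lines give
# per-bucket credentials. Hypothetical sketch, not s3fs's own parser.
def parse_passwd_s3fs(text)
  creds = {}
  text.each_line do |line|
    parts = line.strip.split(':')
    case parts.length
    when 2 then creds[:default] = { access_key: parts[0], secret_key: parts[1] }
    when 3 then creds[parts[0]] = { access_key: parts[1], secret_key: parts[2] }
    end
  end
  creds
end
```

This also makes clear why per-bucket lines need the bucket name first: the number of colon-separated fields is what distinguishes the two formats.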
5. After that change the permissions of the password file
  • If you are using ~/.passwd-s3fs then permission should be 600
  • If you are using /etc/passwd-s3fs then permission should be 640
6. And last, to mount a bucket (mybucket) to a specific directory (/path/to/directory/):

    s3fs mybucket /path/to/directory/

7. To unmount the mounted drive use this command:

    fusermount -u /path/to/directory/

8. To automount the S3 bucket to the local system on every boot, edit the /etc/fstab file and add this at the end of the file:

    s3fs#bucket_name /path/to/directory/ fuse passwd_file=/path/to/passwd/file

Then edit /etc/rc.local and add the following line before exit 0:

    mount -a

9. If you get any error/warning during system boot mounting, just add the _netdev option to the fstab entry:

    s3fs#bucket_name /path/to/directory/ fuse _netdev,passwd_file=/path/to/passwd/file

This will omit the warnings and mount the bucket to your local system. Now you can access Amazon S3 just like another folder on your file system.

Streaming Videos in an Android Application

The blog details how we can stream videos in an Android application. There are basically two parts to the problem:
  1. A streaming server like adobe flash media which can deliver a video stream
  2. Playing the video progressively in your Android application with the help of a video player
The blog assumes that you already have a file uploaded at some place from where the player can pick it up to play. The simplest and easiest solution is to use the Android VideoView, but the problem with VideoView is that it requires the whole file to be downloaded before it starts playing. It can be very annoying to serve videos like this in a real-time production application. If, however, you are happy with this behavior, you can skip the rest of the blog.
Vitamio player
Vitamio for Android is an Android library with a VideoView similar to Android's native VideoView. Vitamio's VideoView can be used to play or stream videos. To integrate Vitamio, its Android library needs to be included in the Android project. The library is available at the following URL: https://github.com/yixia/VitamioBundle
Using Vitamio with an existing Android project
After downloading the Vitamio bundle, add it into your Android project as a library. We will now build a simple Vitamio VideoView to play videos.
Layout
In your layout XML file, add the following lines to add the VideoView:

 <io.vov.vitamio.widget.VideoView
 android:id="@+id/video_view"
 android:layout_width="match_parent"
 android:layout_height="match_parent" />

Activity
In your activity file import Vitamio's VideoView (io.vov.vitamio.widget.VideoView) instead of the Android native VideoView (android.widget.VideoView). The following sample code demonstrates how you can create a VideoView to stream a video within the Vitamio VideoView.

 //Finding the video view defined in the layout
 final VideoView mVideoView = (VideoView) findViewById(R.id.video_view);

 //Setting video path (url)
 mVideoView.setVideoPath("<Video url>");

 //Setting main focus on video view
 mVideoView.requestFocus();

 //Initializing the video player's media controller.
 MediaController controller = new MediaController(this);

 //Binding media controller with VideoView
 mVideoView.setMediaController(controller);

 //Registering a callback to be invoked when the media file is loaded and ready to go.
 mVideoView.setOnPreparedListener(new OnPreparedListener() {
     public void onPrepared(MediaPlayer mp) {
         //Starting the player after getting information from the url.
         mVideoView.start();
     }
 });
That's it! You are now ready to stream videos within the Vitamio video view. There are a lot of options that you can explore in the Vitamio documentation, which is available on their site.

Integrating Google maps API v2 with Android application

Google Maps Android API v2
The Google Maps Android API v2 allows us to integrate interactive, feature-rich Google maps into our Android application.
Advantages of API v2 over API v1
  • The Maps API now uses vector tiles. Their data representation is smaller, so maps appear in your apps faster, and use less bandwidth.
  • Caching is improved, so users will typically see a map without empty areas.
  • Maps are now 3D. By moving the user’s viewpoint, you can show the map with perspective.
Pre-Requisites for integrating Google Maps API v2 into my Android application
  • The API is distributed as part of the Google Play services SDK, which can be downloaded with the Android SDK Manager. To use the Google Maps Android API v2 in your application, we first need to install the Google Play services SDK.
    Installing Google Play services: In Eclipse, choose Window > Android SDK Manager. In the list of packages that appears, scroll down to the Extras folder and expand it. Select the Google Play services checkbox and install the package.
  • Next, to use Google Maps we need to create a valid Google Maps API key. The key is free; we get it via the Google APIs Console. We have to provide our application signature key and the application package name in order to get the Google Maps API key.
What is the application signature key?
The Maps API key is based on a short form of your application's digital certificate, known as its SHA-1 fingerprint. The fingerprint is a unique text string generated from the commonly-used SHA-1 hashing algorithm. Because the fingerprint is itself unique, Google Maps uses it as a way to identify our application.
Generating the SHA-1 key
Find the .android directory on your device; it is under your home directory.
Windows Vista/7: <installation drive> (C, D, E, whatever):\Users\<username>\.android
OS X and Linux: ~/.android
Find the debug keystore: a file called debug.keystore lies in the .android directory. To create the SHA-1 for your debug keystore, use the keytool command from your JDK installation, pointing it to the debug.keystore file. Command:
keytool -list -v -alias androiddebugkey -keystore <path_to_debug_keystore>/debug.keystore -storepass android -keypass android
Now follow this documentation to generate the API key: https://developers.google.com/maps/documentation/android/start#obtaining_an_api_key
  • Now that you have installed the Google Play services library and generated an API key for your app, let's move to the actual process of integrating a Google map.
Steps to Integrate Google maps to an Android application
  • Create a new android project in Eclipse, with package name that you have registered with Google console to get API key
  • Import the Google Play services library into your workspace: browse to extras/google/google_play_services/libproject/google-play-services_lib under your Android SDK directory and select google-play-services_lib
  • Add this library to your Android app via Project -> Properties -> Android -> Library, Add -> google-play-services_lib
  • Code in MainActivity should look like this:

    public class MainActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
        }
    }
  • Maps are now encapsulated in the MapFragment class, an extension of Android's Fragment class. Now you can add a map as a piece of a larger Activity. With a MapFragment object, you can show a map by itself on smaller screens, such as mobile phones, or as part of a more complex UI on larger-screen devices, such as tablets.
What is MapFragment? It is a Map component in an app. This fragment is the simplest way to place a map in an application. It’s a wrapper around a view of a map to automatically handle the necessary life cycle needs. Being a fragment, this component can be added to an activity’s layout file simply with the XML below.

<?xml version="1.0" encoding="utf-8"?>
 <fragment xmlns:android="http://schemas.android.com/apk/res/android"
 android:id="@+id/map"
 android:layout_width="match_parent"
 android:layout_height="match_parent"
 android:name="com.google.android.gms.maps.MapFragment"/>

So we need to replace the content of res/layout/activity_main.xml with the code above.
  • Add the following tag into your AndroidManifest.xml, just before the closing </application> tag. From here, the Maps API reads the key value and passes it to the Google Maps server, which then confirms that you have access to Google Maps data.
    <meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="your_api_key"/>
  • Your application now also needs the following permissions in the AndroidManifest.xml
    <permission android:name="your_package_name.permission.MAPS_RECEIVE" android:protectionLevel="signature"/>
     <uses-permission android:name="your_package_name.permission.MAPS_RECEIVE"/>
     <uses-permission android:name="android.permission.INTERNET"/>
  • Maps v2 uses OpenGL, so the following uses-feature is also required:

    <uses-feature android:glEsVersion="0x00020000" android:required="true"/>
  • That's it! Build and run your application.
  • You should see a map. If you don't see one, please confirm that you haven't missed any of the configuration steps.