Intro to Google BigQuery
This time I write about Google BigQuery, a service that Google made publicly available in May 2012. It had been around for some time: a Google Research blog post talked about it in 2010, then Google announced a limited preview in November 2011, and eventually it went live this month.
The technology is based on Dremel, not MapReduce. The reason for having an alternative to MapReduce is described in the Dremel paper: "Dremel can execute many queries over such data that would ordinarily require a sequence of MapReduce ... jobs, but at a fraction of the execution time. Dremel is not intended as a replacement for MR and is often used in conjunction with it to analyze outputs of MR pipelines or rapidly prototype larger computations."
So what is BigQuery? As it is answered on the Google BigQuery website: "Google BigQuery is a web service that lets you do interactive analysis of massive datasets—up to billions of rows."
Getting Started With BigQuery
In order to use BigQuery, you first need to sign up for it via the Google API Console. Once that is done, you can start using the service. The easiest way to start is the BigQuery browser tool.
BigQuery Browser Tool
When you first log in to the BigQuery browser tool, you see the following welcome message:
There is already a public dataset available, so you can have a quick look around and experience how to use the BigQuery browser tool. E.g., here is the schema of the github_timeline table, a snapshot from GitHub Archive:
You can run a simple query using Compose Query in the browser tool; the syntax is SQL-like:
select repository_name, repository_owner, repository_description from publicdata:samples.github_timeline limit 1000;
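Aggregations work the same way. As a quick sketch of my own (not a query from the original walkthrough, and assuming the SQL dialect of the time handles GROUP BY and column aliases as expected), this would list the owners with the most repositories:

select repository_owner, count(*) as repos from publicdata:samples.github_timeline group by repository_owner order by repos desc limit 10;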
So far so good... Let us now create our own tables. The dataset I was using is from the World Bank Data Catalogue: GDP and population data for countries all over the world. These are available in CSV format (as well as Excel and PDF).
As a first step, we need to create the dataset (a dataset is basically one or more tables in BigQuery). You need to click on the down-arrow icon next to the API project and select "Create new dataset".
Then you need to create the table. Click on the down-arrow for the dataset (worldbank in our case) and select "Create new table".
Then you need to define table parameters such as name, schema, and the source file to be uploaded. Note: Internet Explorer 8 does not seem to support CSV file upload (a "File upload is not currently supported in your browser." message appears for the file upload link). You'd better go with Chrome, which supports CSV file upload.
When you upload the file, you need to specify the schema in the following format: county_code:string,ranking:integer,country_name:string,value:integer
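For reference, a data row matching this schema would look something like this (my own reconstruction from the table contents shown later, so the exact layout of the World Bank file may differ):

CHN,1,China,1338300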
There are advanced options available, too: you can use, e.g., tab-separated files instead of comma-separated ones, you can define how many invalid rows are accepted, how many rows are skipped, etc.
During the upload, the data is validated against the specified schema; if it is violated, you will get error messages in the job history (e.g., "Too many columns: expected 4 column(s) but got 5 column(s)").
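Since a single malformed line can fail the whole upload, it can be worth checking the file locally first. Here is a minimal sketch of my own (not from the original article; 'gdp.csv' is a hypothetical local file name) that flags rows not matching the expected four-column layout:

import csv

# Minimal sketch: report CSV rows that do not have exactly the four
# columns of the schema above, before uploading to BigQuery.
def find_invalid_rows(path, expected_columns=4):
    invalid = []
    with open(path, 'rb') as f:
        for line_no, row in enumerate(csv.reader(f), 1):
            if len(row) != expected_columns:
                invalid.append((line_no, row))
    return invalid

# 'gdp.csv' is a hypothetical file name for the downloaded World Bank data.
print find_invalid_rows('gdp.csv')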
Once the upload has successfully finished, you are ready to execute queries on the data. You can use Compose Query for that, as we have already described for the github_timeline table. To display the top 10 countries with the highest GDP values, you run the following query:
select country_name, value from worldbank.gdp order by value desc limit 10
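Since both tables share the county_code column, they can also be combined in a single query. The following is my own sketch rather than a query from the article, and it assumes the dialect's join support of the time accepts this syntax; also note that GDP is given in millions of USD while population is in thousands, so the ratio is only a rough per-capita indicator:

select g.country_name, g.value / p.value as gdp_per_capita from worldbank.gdp as g join worldbank.population as p on g.county_code = p.county_code order by gdp_per_capita desc limit 10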
BigQuery Command Line Tool
That was easy, but we are hard-core software guys, aren't we? We need a command line, not just browser-based functionality! Relax, there is a BigQuery command line tool, written in Python.
You can download it from here; after unzipping the file, you install it by running: python setup.py install
I used the BigQuery command line tool from a Windows 7 machine; the usage is very much the same on Linux, with the exception of where the credentials are stored on your local computer (that would be ~/.bigquery.v2.token and ~/.bigqueryrc on Linux, and %USERPROFILE%\.bigquery.v2.token and %USERPROFILE%\.bigqueryrc on Windows).
When you run it for the first time, it needs to be authenticated via OAuth2.
C:\bigquery\bigquery-2.0.4>python bq.py shell
******************************************************************
** No OAuth2 credentials found, beginning authorization process **
******************************************************************
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&client_id=123456789.apps.googleusercontent.com&access_type=offline
Enter verification code: *********
Authentication successful.
************************************************
** Continuing execution of BigQuery operation **
************************************************
Welcome to BigQuery! (Type help for more information.)
BigQuery> ls
   projectId      friendlyName
 -------------- --------------
  190120083879    API Project
BigQuery> exit
So the first time, you need to go to the given URL in your browser, allow access for the BigQuery command line tool, and copy and paste the generated verification code at the "Enter verification code" prompt. The credentials will then be stored on your local machine, as mentioned above, and you do not need to allow access from then on (unless you want to re-initialize the entire access process).
So on the second attempt to run the BigQuery shell, it goes flawlessly, without authentication:
C:\bigquery\bigquery-2.0.4>python bq.py shell
Welcome to BigQuery! (Type help for more information.)
BigQuery> ls
   projectId      friendlyName
 -------------- --------------
  190120083879    API Project
BigQuery> ls 190120083879
   datasetId
 -----------
  worldbank
BigQuery> exit
Goodbye.
To check the schema of the gdp and population tables (the population table has the same schema as gdp and was also uploaded the same way, via the BigQuery browser tool):
C:\bigquery\bigquery-2.0.4>python bq.py show 190120083879:worldbank.gdp
Table 190120083879:worldbank.gdp

   Last modified            Schema            Total Rows   Total Bytes
 ----------------- ------------------------- ------------ -------------
  13 May 12:10:33   |- county_code: string    195          6265
                    |- ranking: integer
                    |- country_name: string
                    |- value: integer

C:\bigquery\bigquery-2.0.4>python bq.py show 190120083879:worldbank.population
Table 190120083879:worldbank.population

   Last modified            Schema            Total Rows   Total Bytes
 ----------------- ------------------------- ------------ -------------
  13 May 12:14:02   |- county_code: string    215          7007
                    |- ranking: integer
                    |- country_name: string
                    |- value: integer
To check the first 10 rows of the population table (you may notice that the values are ordered; that is because the values were already ordered in the World Bank CSV file):
C:\bigquery\bigquery-2.0.4>python bq.py head -n 10 190120083879:worldbank.population
+-------------+---------+--------------------+---------+
| county_code | ranking |    country_name    |  value  |
+-------------+---------+--------------------+---------+
| CHN         | 1       | China              | 1338300 |
| IND         | 2       | India              | 1224615 |
| USA         | 3       | United States      | 309349  |
| IDN         | 4       | Indonesia          | 239870  |
| BRA         | 5       | Brazil             | 194946  |
| PAK         | 6       | Pakistan           | 173593  |
| NGA         | 7       | Nigeria            | 158423  |
| BGD         | 8       | Bangladesh         | 148692  |
| RUS         | 9       | Russian Federation | 141750  |
| JPN         | 10      | Japan              | 127451  |
+-------------+---------+--------------------+---------+
In order to run a select query against a table, you first need to initialize the project, so you have to have .bigqueryrc properly configured:
C:\Users\Istvan>type .bigqueryrc
project_id = 190120083879
credential_file = C:\Users\Istvan\.bigquery.v2.token
dataset_id = worldbank

C:\Users\Istvan>
Then you can run:
C:\bigquery\bigquery-2.0.4>python bq.py query "select country_name, value from worldbank.gdp order by value desc limit 10"
Waiting on job_5745d8eb41cf489fbf6ffb7a3bc3487e ... (0s) Current status: RUNNING
Waiting on job_5745d8eb41cf489fbf6ffb7a3bc3487e ... (0s) Current status: DONE
+----------------+----------+
|  country_name  |  value   |
+----------------+----------+
| United States  | 14586736 |
| China          | 5926612  |
| Japan          | 5458837  |
| Germany        | 3280530  |
| France         | 2560002  |
| United Kingdom | 2261713  |
| Brazil         | 2087890  |
| Italy          | 2060965  |
| India          | 1727111  |
| Canada         | 1577040  |
+----------------+----------+
BigQuery API
The BigQuery browser tool and the command line tool will do in most cases. But hell, aren't we even tougher guys, masters of the APIs? If so, Google BigQuery offers APIs and BigQuery client libraries for us, too. These are available in Python, Java, .NET, PHP, Ruby, Objective-C, etc.
Here is a Python application that runs the same select query that we used from the browser tool and the command line:
import httplib2
import sys
import pprint

from apiclient.discovery import build
from apiclient.errors import HttpError
from oauth2client.client import AccessTokenRefreshError
from oauth2client.client import OAuth2WebServerFlow
from oauth2client.file import Storage
from oauth2client.tools import run

flow = OAuth2WebServerFlow(
    client_id='123456789.apps.googleusercontent.com',
    client_secret='*************',
    scope='https://www.googleapis.com/auth/bigquery',
    user_agent='bq/2.0')

# Run a synchronous query
def runSyncQuery(service, projectId, datasetId, timeout=0):
    try:
        print 'timeout:%d' % timeout
        jobCollection = service.jobs()
        queryData = {'query': 'select country_name, value from worldbank.gdp order by value desc limit 10;',
                     'timeoutMs': timeout}

        queryReply = jobCollection.query(projectId=projectId,
                                         body=queryData).execute()
        jobReference = queryReply['jobReference']

        # Timeout exceeded: keep polling until the job is complete.
        while not queryReply['jobComplete']:
            print 'job not yet complete...'
            queryReply = jobCollection.getQueryResults(
                projectId=jobReference['projectId'],
                jobId=jobReference['jobId'],
                timeoutMs=timeout).execute()

        pprint.pprint(queryReply)

    except AccessTokenRefreshError:
        print ('The credentials have been revoked or expired, please re-run '
               'the application to re-authorize')
    except HttpError as err:
        print 'Error in runSyncQuery:'
        pprint.pprint(err.content)
    except Exception as err:
        print 'Undefined error: %s' % err

def main():
    # If the credentials don't exist or are invalid, run the native client
    # auth flow. The Storage object will ensure that if successful the good
    # credentials will get written back to a file.
    storage = Storage(r'C:\Users\Istvan\.bigquery.v2.token')  # choose a file name to store the credentials
    credentials = storage.get()

    if credentials is None or credentials.invalid:
        credentials = run(flow, storage)

    # Create an httplib2.Http object to handle our HTTP requests and
    # authorize it with our good credentials.
    http = httplib2.Http()
    http = credentials.authorize(http)

    service = build('bigquery', 'v2', http=http)

    # Now make the call
    print 'make call'
    runSyncQuery(service, projectId='190120083879', datasetId='worldbank')

if __name__ == '__main__':
    main()
The output will look like this:
C:\bigquery\pythonclient>python bq_client.py
make call
timeout:0
job not yet complete...
job not yet complete...
job not yet complete...
{u'etag': u'"6wedxp58pwcuv91kolrb8l7rm_a/69kaovehho4pbtqit7nlzybfipc"',
 u'jobComplete': True,
 u'jobReference': {u'jobId': u'job_9a1c0d2bcf9443b18e2204d1f4db476a',
                   u'projectId': u'190120083879'},
 u'kind': u'bigquery#getQueryResultsResponse',
 u'rows': [{u'f': [{u'v': u'United States'}, {u'v': u'14586736'}]},
           {u'f': [{u'v': u'China'}, {u'v': u'5926612'}]},
           {u'f': [{u'v': u'Japan'}, {u'v': u'5458837'}]},
           {u'f': [{u'v': u'Germany'}, {u'v': u'3280530'}]},
           {u'f': [{u'v': u'France'}, {u'v': u'2560002'}]},
           {u'f': [{u'v': u'United Kingdom'}, {u'v': u'2261713'}]},
           {u'f': [{u'v': u'Brazil'}, {u'v': u'2087890'}]},
           {u'f': [{u'v': u'Italy'}, {u'v': u'2060965'}]},
           {u'f': [{u'v': u'India'}, {u'v': u'1727111'}]},
           {u'f': [{u'v': u'Canada'}, {u'v': u'1577040'}]}],
 u'schema': {u'fields': [{u'mode': u'NULLABLE',
                          u'name': u'country_name',
                          u'type': u'STRING'},
                         {u'mode': u'NULLABLE',
                          u'name': u'value',
                          u'type': u'INTEGER'}]},
 u'totalRows': u'10'}

C:\bigquery\pythonclient>
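As you can see, the rows come back in BigQuery's generic f (fields) / v (value) structure. As a minimal sketch of my own (not part of the original article), this is how the reply printed above could be turned into plain tuples:

# Minimal sketch: convert BigQuery's f/v row structure into plain Python
# tuples, taking the column names from the schema part of the reply.
# 'queryReply' is assumed to be the dict shown in the output above.
def extract_rows(queryReply):
    names = [field['name'] for field in queryReply['schema']['fields']]
    rows = [tuple(cell['v'] for cell in row['f'])
            for row in queryReply.get('rows', [])]
    return names, rows

names, rows = extract_rows(queryReply)
print names    # [u'country_name', u'value']
print rows[0]  # (u'United States', u'14586736')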
If you want to delve into the BigQuery API, here is the link to start.
Published at DZone with permission of Istvan Szegedi, DZone MVB. See the original article here.