Connecting an Autonomous Data Warehouse With Python
In this article, take a look at how to connect to an autonomous data warehouse with Python and walk through a machine learning example.
In this article, I will connect to an Oracle database running in the cloud (Oracle Autonomous Data Warehouse) and build a simple regression application in a Python environment using a sample dataset.
First of all, I will build this application on Autonomous Data Warehouse (ADW), which is offered as a service in Oracle Cloud. All I need is an Oracle Cloud account. Autonomous Data Warehouse is one of the Always Free (Oracle Free Tier) services, so you can provision it free of charge and start using it in the cloud within minutes, without any installation. See the Oracle Cloud documentation for details.
You can provision the Autonomous Data Warehouse service through the Oracle Cloud Infrastructure console.
The second component I need is the cx_Oracle package, which lets Python connect to the Oracle database in the cloud from my local environment. cx_Oracle in turn requires an Oracle Client to be installed on my machine.
You can perform both of these installations by following the cx_Oracle installation documentation.
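A quick way to verify the driver setup is a small import check. This is a sketch under the assumption that the classic cx_Oracle package (since renamed python-oracledb) is being used:

```python
# Sanity check that cx_Oracle and the Oracle Client libraries are usable.
def check_driver():
    try:
        # Importing cx_Oracle requires the Oracle Client libraries to be
        # reachable (e.g. via PATH on Windows or LD_LIBRARY_PATH on Linux).
        import cx_Oracle
    except ImportError:
        return "cx_Oracle is not installed: pip install cx_Oracle"
    return "cx_Oracle %s, Oracle Client %s" % (
        cx_Oracle.version, ".".join(map(str, cx_Oracle.clientversion())))

if __name__ == "__main__":
    print(check_driver())
```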
I need a dataset and a problem to work on. I chose Boston housing prices as the problem; to solve it, I will build a regression model. I get the dataset from Kaggle (Boston Housing).
Let's first examine the BOSTON_HOUSING dataset.
| Column Name | Description | Data Type |
|---|---|---|
| crim | per capita crime rate by town | Number |
| zn | proportion of residential land zoned for lots over 25,000 sq. ft. | Number |
| indus | proportion of non-retail business acres per town | Number |
| chas | Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) | Number |
| nox | nitrogen oxides concentration (parts per 10 million) | Number |
| rm | average number of rooms per dwelling | Number |
| age | proportion of owner-occupied units built prior to 1940 | Number |
| dis | weighted mean of distances to five Boston employment centers | Number |
| rad | index of accessibility to radial highways | Number |
| tax | full-value property-tax rate per $10,000 | Number |
| ptratio | pupil-teacher ratio by town | Number |
| black | 1000(Bk - 0.63)^2, where Bk is the proportion of Black residents by town | Number |
| lstat | lower status of the population (percent) | Number |
| medv | median value of owner-occupied homes in $1000s | Number |
Now that we have reviewed the details of our dataset, let's load the BOSTON_HOUSING data that we downloaded into our Oracle database.
First, create the Oracle table into which we will load the dataset (train.csv) that we downloaded.
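The article's exact DDL isn't shown here, so the following is a sketch: the table and column names mirror the dataset description above, and the ID column (present in Kaggle's train.csv) is an assumption on my part.

```python
# Hypothetical DDL for the BOSTON_HOUSING table, matching the dataset
# description; executed over an existing cx_Oracle connection.
CREATE_TABLE_SQL = """
CREATE TABLE BOSTON_HOUSING (
    ID      NUMBER,
    CRIM    NUMBER,  ZN      NUMBER,  INDUS   NUMBER,  CHAS  NUMBER,
    NOX     NUMBER,  RM      NUMBER,  AGE     NUMBER,  DIS   NUMBER,
    RAD     NUMBER,  TAX     NUMBER,  PTRATIO NUMBER,  BLACK NUMBER,
    LSTAT   NUMBER,  MEDV    NUMBER
)"""

def create_table(connection):
    # DDL statements are auto-committed by Oracle, so no commit() is needed.
    cursor = connection.cursor()
    cursor.execute(CREATE_TABLE_SQL)
    cursor.close()
```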
Now that we have created our table, we will load the dataset we downloaded as CSV into the table; we have multiple methods to do this:
- Using Oracle External Table.
- Using Oracle SQL*Loader.
- Using SQL/PL-SQL editors (Oracle SQL Developer, Toad, PL/SQL Developer, etc.).
I will load the dataset using my usual editor, Oracle SQL Developer, which provides a data import wizard for loading CSV files into a table.
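The wizard approach needs no code, but the same load can also be done programmatically. Here is a sketch that assumes the BOSTON_HOUSING table above and Kaggle's train.csv layout (an ID column followed by the thirteen features and medv, which is an assumption on my part):

```python
import csv

# Bulk-insert train.csv into BOSTON_HOUSING with cx_Oracle's executemany().
INSERT_SQL = (
    "INSERT INTO BOSTON_HOUSING "
    "(ID, CRIM, ZN, INDUS, CHAS, NOX, RM, AGE, DIS, RAD, TAX, "
    "PTRATIO, BLACK, LSTAT, MEDV) "
    "VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15)"
)

def load_csv(connection, path="train.csv"):
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                      # skip the header row
        rows = [[float(v) for v in row] for row in reader]
    cursor = connection.cursor()
    cursor.executemany(INSERT_SQL, rows)  # one round trip per batch
    connection.commit()
    cursor.close()
    return len(rows)
```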
We have completed the dataset loading process.
When we examine the data, we see various characteristics of the houses; each row describes a single house. These columns are our input parameters for the regression analysis, and the MEDV column is the target variable we will predict.
Now let's start writing the necessary code on the Python side.
With cx_Oracle, we connect to ADW and extract the data from the relevant table into a pandas DataFrame in Python.
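A minimal sketch of that step. The user, password, and DSN here are placeholders, and connecting to ADW additionally requires the client credentials wallet (with the TNS_ADMIN environment variable pointing at the wallet directory):

```python
import pandas as pd

QUERY = "SELECT * FROM BOSTON_HOUSING"

def fetch_boston(user="ADMIN", password="<password>", dsn="<dbname>_high"):
    # Deferred import so the module can be loaded without the driver.
    import cx_Oracle
    connection = cx_Oracle.connect(user, password, dsn)
    try:
        # Pull the whole table into a pandas DataFrame.
        return pd.read_sql(QUERY, con=connection)
    finally:
        connection.close()
```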
Now let's create and test our model with the data we have.
Now let's take a test record and make an estimate using the model we created for this record.
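Since the database pull isn't reproducible here, the sketch below trains on a synthetic DataFrame with the same column names; in the article the model is trained on the data extracted from ADW, and the choice of scikit-learn's LinearRegression is my assumption.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Stand-in for the frame pulled from ADW: synthetic data with the same
# columns, so the sketch runs without a database connection.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame(rng.random((n, 13)),
                  columns=["CRIM", "ZN", "INDUS", "CHAS", "NOX", "RM", "AGE",
                           "DIS", "RAD", "TAX", "PTRATIO", "BLACK", "LSTAT"])
# Target constructed from the features plus noise, echoing medv's role.
df["MEDV"] = 5 * df["RM"] - 3 * df["LSTAT"] + rng.normal(0, 0.1, n)

X = df.drop(columns="MEDV")
y = df["MEDV"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit the regression model and check it on the held-out split.
model = LinearRegression().fit(X_train, y_train)
print("R^2 on the held-out split:", model.score(X_test, y_test))

# Estimate MEDV for a single test record, as in the article.
record = X_test.iloc[[0]]
predicted_medv = model.predict(record)[0]
print("Predicted MEDV:", predicted_medv)
```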
Now let's update the MEDV value of the estimated record in the database.
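A sketch of writing the prediction back to the table. The WHERE clause assumes an ID column identifies the record, which is an assumption on my part:

```python
# Write the model's estimate back into the MEDV column of one row.
UPDATE_SQL = "UPDATE BOSTON_HOUSING SET MEDV = :medv WHERE ID = :id"

def write_back(connection, record_id, predicted_medv):
    cursor = connection.cursor()
    # Named bind variables are passed as keyword arguments to execute().
    cursor.execute(UPDATE_SQL, medv=float(predicted_medv), id=record_id)
    connection.commit()
    cursor.close()
```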
As we have seen, we extracted data from the cloud database with Python, used it to train a model built with scikit-learn, and tested the model with a new record.