
MySQL and Ubuntu 12.10


· Performance Zone ·


I recently had reason to shunt large amounts of MySQL data around on my Ubuntu 12.10 powered laptop. The data was exported from MySQL as .sql files, each in excess of 50 MB. That ruled out PHPMyAdmin for re-importing the data. Here is what I did to solve the problem.

As far as I am aware, there isn’t a software application available for Ubuntu that provides a GUI environment for this kind of task. I know of Navicat, but it is expensive ($129/£83 for the standard edition) for an app that still only runs under Wine.

It’s not MySQL and PHPMyAdmin, it’s me

I am NOT complaining about PHPMyAdmin. It’s a great piece of work, and I use it for most of my querying work on my Ubuntu machine. It just can’t handle huge data imports without a lot of configuration changes. And I really don’t like configuring things. Well, not if there is another (simpler) solution available.
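(For completeness: the limits involved live in php.ini, and the values below are only illustrative — your files may demand more, and MySQL’s own max_allowed_packet setting can also get in the way. This is the kind of fiddling I was keen to avoid.)

```ini
; illustrative php.ini values for large uploads via PHPMyAdmin
upload_max_filesize = 128M
post_max_size = 128M
max_execution_time = 300
memory_limit = 256M
```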

So, the problem…

I had several large .sql files available on the local network that I wanted to import into my local copy of MySQL. The data was an export from a production database that was going to allow me to test some coding changes in as realistic a way as possible.

I just need to get that data in…

The solution – hello command line

If you read that heading and involuntarily convulsed, fear not. This is going to be (mostly) painless. I’m going to assume that you have MySQL installed via Synaptic; a standard installation, in other words.


Open up the terminal window and do:

mysql -u username -p -h localhost

That is how you connect. Simple as that. It’s not too bad, is it? Mind you, the cursor will be flashing at you, so you might be wondering what to do next.

While this isn’t meant to be a full tutorial on using MySQL from the command line, here are a few basic things.

Choosing a database to use

That is an easy one:

use testing;

After you have connected, that will make sure you are now working with the database called testing. To see what databases you have do:

show databases;

You can create a database and some tables too

This shouldn’t come as a surprise – of course you can. How though? Something like this:

create database test_db;
use test_db;

Now we are using our new database. It doesn’t have any tables though, so let’s fix that:

CREATE TABLE `notes` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `title` varchar(200) DEFAULT NULL,
  `note` text,
  PRIMARY KEY (`id`)
);
This creates a simple table in the test_db database. We can now take a look at the table we just created:

mysql> describe notes;
+-------+------------------+------+-----+---------+----------------+
| Field | Type             | Null | Key | Default | Extra          |
+-------+------------------+------+-----+---------+----------------+
| id    | int(11) unsigned | NO   | PRI | NULL    | auto_increment |
| title | varchar(200)     | YES  |     | NULL    |                |
| note  | text             | YES  |     | NULL    |                |
+-------+------------------+------+-----+---------+----------------+
3 rows in set (0.04 sec)
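If you want to sanity-check the new table before moving on, you can insert a row and read it back (the values here are just placeholders):

```sql
INSERT INTO notes (title, note) VALUES ('first note', 'hello from the command line');
SELECT * FROM notes;
```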


I think that is pretty painless so far. Next, we’ll deal with the process of importing some data.

Getting data in

Typing “quit” allows you to leave the MySQL monitor, so do that now. I have found that the import works best by running the connection and the import as a single command. It looks like this:

mysql -u root -p -h localhost test_db < /path/to/file/world.sql

Depending on the size of the file, nothing will happen for a while, but then the prompt will return. That’s it, job done.
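One quality-of-life tip: for very large files that silence can be unnerving. If you install the pv utility (sudo apt-get install pv), you can pipe the file through it and get a progress bar instead — same credentials and database as above:

pv /path/to/file/world.sql | mysql -u root -p -h localhost test_db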

Make sure you are connected and using test_db, then do:

show tables;

+-------------------+
| Tables_in_test_db |
+-------------------+
| City              |
| Country           |
| CountryLanguage   |
| notes             |
+-------------------+
4 rows in set (0.00 sec)

You will then be able to see the table(s) you just imported. By the way, I was using the world sample database for MySQL, available from here. It’s not a particularly large file, but it will give you the idea.
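The reverse trip works the same way, incidentally: mysqldump writes out a .sql file that you can re-import with the command shown earlier. Something like this (same assumed credentials):

mysqldump -u root -p -h localhost test_db > test_db_backup.sql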


So in the end, the command line is the best way to work with large .sql files on Ubuntu. I suspect that might be the case for Linux in general. The process is not too difficult, and there is something quite refreshing about just banging in a few commands and getting things done.


