More on Geo Location and Spatial Searches with RavenDB: Importing Data

by Oren Eini · Jun. 21, 2012

The following are samples from the data sources that MaxMind provides for us:

[Image: sample rows from GeoLiteCity-Blocks.csv and GeoLiteCity-Location.csv]

The question is: how do we load them into RavenDB?

Just to give you some numbers, there are 1.87 million blocks and over 350,000 locations.

Those are big numbers, but still small enough that we can work with the entire thing in memory. I wrote quick & ugly parsing routines for them:

public static IEnumerable<Tuple<int, IpRange>> ReadBlocks(string dir)
{
    using (var file = File.OpenRead(Path.Combine(dir, "GeoLiteCity-Blocks.csv")))
    using (var reader = new StreamReader(file))
    {
        reader.ReadLine(); // skip the copyright line
        reader.ReadLine(); // skip the header line

        string line;
        while ((line = reader.ReadLine()) != null)
        {
            // columns: startIpNum, endIpNum, locId
            var entries = line.Split(',').Select(x => x.Trim('"')).ToArray();
            yield return Tuple.Create(
                int.Parse(entries[2]),
                new IpRange
                {
                    Start = long.Parse(entries[0]),
                    End = long.Parse(entries[1]),
                });
        }
    }
}

public static IEnumerable<Tuple<int, Location>> ReadLocations(string dir)
{
    using (var file = File.OpenRead(Path.Combine(dir, "GeoLiteCity-Location.csv")))
    using (var reader = new StreamReader(file))
    {
        reader.ReadLine(); // skip the copyright line
        reader.ReadLine(); // skip the header line

        string line;
        while ((line = reader.ReadLine()) != null)
        {
            // columns: locId, country, region, city, postalCode, latitude, longitude, metroCode, areaCode
            var entries = line.Split(',').Select(x => x.Trim('"')).ToArray();
            yield return Tuple.Create(
                int.Parse(entries[0]),
                new Location
                {
                    Country = NullIfEmpty(entries[1]),
                    Region = NullIfEmpty(entries[2]),
                    City = NullIfEmpty(entries[3]),
                    PostalCode = NullIfEmpty(entries[4]),
                    // the CSV uses '.' as the decimal separator, so parse culture-invariantly
                    Latitude = double.Parse(entries[5], CultureInfo.InvariantCulture),
                    Longitude = double.Parse(entries[6], CultureInfo.InvariantCulture),
                    MetroCode = NullIfEmpty(entries[7]),
                    AreaCode = NullIfEmpty(entries[8])
                });
        }
    }
}

private static string NullIfEmpty(string s)
{
    return string.IsNullOrWhiteSpace(s) ? null : s;
}
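
The IpRange and Location classes themselves are not shown here. The following is a minimal sketch of what the parsing and merging code implies they look like; the property names come from the assignments above and below, everything else is an assumption:

public class IpRange
{
    public long Start { get; set; }
    public long End { get; set; }
}

public class Location
{
    public string Country { get; set; }
    public string Region { get; set; }
    public string City { get; set; }
    public string PostalCode { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public string MetroCode { get; set; }
    public string AreaCode { get; set; }

    // populated by the merge step below: all the IP blocks that map to this location
    public IpRange[] Ranges { get; set; }
}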

And then it was a matter of bringing it all together:

    var blocks = from blockTuple in ReadBlocks(dir)
                 group blockTuple by blockTuple.Item1
                 into g
                 select new
                 {
                     LocId = g.Key,
                     Ranges = g.Select(x => x.Item2).ToArray()
                 };

    var results =
        from locTuple in ReadLocations(dir)
        join block in blocks on locTuple.Item1 equals block.LocId into joined
        from joinedBlock in joined.DefaultIfEmpty()
        // the let clause is used only for its side effect: attaching the
        // location's IP ranges (or an empty array) before yielding it
        let _ = locTuple.Item2.Ranges = (joinedBlock == null ? new IpRange[0] : joinedBlock.Ranges)
        select locTuple.Item2;

The advantage of doing things this way is that we only have to write to RavenDB once, because we merged the results in memory. That is why I said those are big numbers, but still small enough for us to process easily in memory.

Finally, we wrote them to RavenDB in batches of 1,024 items.
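
The write loop itself isn't shown in the post. Here is a minimal sketch of how such batched writes look with the RavenDB client API, assuming the results enumerable from the previous snippet; the server URL and the session-per-batch structure are assumptions, not the original code:

// requires the RavenDB client (Raven.Client.Document namespace)
using (var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
{
    var session = store.OpenSession();
    var count = 0;
    foreach (var location in results)
    {
        session.Store(location);
        if (++count % 1024 != 0)
            continue;

        // SaveChanges sends every pending document to the server in one request;
        // opening a fresh session afterwards keeps the change tracker small
        session.SaveChanges();
        session.Dispose();
        session = store.OpenSession();
    }
    session.SaveChanges(); // the final, partial batch
    session.Dispose();
}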

The entire process took about 3 minutes and wrote 353,224 documents to RavenDB, which include all of the 1.87 million IP blocks in a format that is easy to search through.

In our next post, we will discuss actually doing searches on this information.


Published at DZone with permission of Oren Eini, DZone MVB.
