Content Is Still King: How We Use Neo4j at Packt Publishing
This is how one publisher is keeping the content flowing courtesy of Neo4j.
"content is king," bill gates famously said 20 years ago, making his prediction that soon, "anyone with a pc and a modem can publish whatever content they can create."
So how do you provide relevance to visitors?
Having an array of similar pieces of content present somewhere on the page provides real value in terms of boosting engagement. This is essentially a recommendation based on one piece of content.
The more information you have about a user, the better the recommendation will be. If you have no data for a particular user (a cold start), you can bolster the list with new and popular results, which are more likely to pique the interest of the 'average' visitor.
Centralised 'hubs', where you organise your content in a categorical and/or hierarchical manner, provide the user with lots of content with a high degree of relevance. This approach originated in printed newspapers but has since become a widely used tool for both publishers and content aggregation sites.
Great examples of this can be seen on sites like Reddit (where the categorisation is done manually by the users) or Netflix (plenty has been written on the subject of their metadata generation process!).
At Packt Publishing, we have decided to pursue both of these avenues. We aim to make a more compelling experience for users discovering us for the first time, and to boost engagement and retention amongst our existing user base.
It's not just products, either: we have a vast range of freely available technical blogs, editorial pieces, tutorials and extracts to categorise too. Six months ago, we began to recognise the benefit of the above forms of content linkage. The benefits to user engagement and satisfaction were obvious, but the way to achieve that goal was not.
The rest of this post is about how we approached this problem, which has manifested as our tech page hub.
Implementing categorisation and recommendation of content relies on having accurate metadata attached to all of your content. If you don't have a robust strategy for generating metadata at the point of publication, manually tagging all your content can be a resource-intensive process that is prone to human error.
This is the position we found ourselves in when we embarked on this journey. Our category-level metadata was often too broad to really provide relevance to our users, and our keyword-level metadata was very narrow and often incorrect or simply unusable.
It was mooted that to do this effectively, and at scale, we would need an automated solution.
Put simply, any automated solution to this problem starts with some corpus of terms, with relations between closely connected topics. Each piece of content is then scanned for mentions of those topics and tagged appropriately. Once this is done, you simply decide which topics will form your categories and decide on the hierarchy you will use to present the content within those topics.
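The scan-and-tag step of that pipeline can be sketched in a few lines of Python; the topic names here are made up for illustration, not taken from any real corpus:

```python
import re

# A tiny stand-in corpus of topics (in practice this would come from a
# much larger, curated source with relations between the topics).
topics = {"python", "machine-learning", "neo4j"}

def tag_content(text, topics):
    """Scan a piece of content and return the set of topics it mentions."""
    found = set()
    for topic in topics:
        # Match the topic as a whole word, ignoring case.
        pattern = r"\b" + re.escape(topic) + r"\b"
        if re.search(pattern, text, flags=re.IGNORECASE):
            found.add(topic)
    return found

article = "A hands-on Neo4j tutorial using Python."
print(sorted(tag_content(article, topics)))  # ['neo4j', 'python']
```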
Graph databases are a natural way to think about this problem, for many reasons. With a graph structure, you can represent arbitrary axes of information as nodes, and multifaceted relationships between those axes as edges.
Analysing connections between nodes becomes a trivial matter of defining traversals, and you are never constrained to think along only one dimension. Also, the extensible nature of a graph database schema means that you can add new dimensions of information in a very efficient way. You can prototype and test new parts of your schema rapidly and analyse the impact easily.
We opted to use Neo4j for this project for several reasons, two of the most important being:
- Query language: Cypher is an intuitive and expressive way to explore graph data
- Performance: to this day I am regularly astonished at the speed with which we can execute even very complex queries, and at how well those queries scale to huge datasets
The first thing we need is a domain-specific corpus of topics, with some notion of relation between the topics. Fortunately for us, an extensive, well-moderated and widely used corpus already exists: stackoverflow.com tags.
Stack Overflow is a Q&A site for developers looking for answers to all manner of software problems. Questions on the site are tagged with one or more tags defining what subjects the problem covers.
This allows fantastic search, both for potential questioners and for the community of experts looking to share their expertise. The tags are controlled by the community and moderated for consistency and usefulness.
There is also a natural way to connect those topics. When two tags appear on the same question, that co-occurrence tells us those two topics are in some way related to each other (e.g., for the 'neo4j' tag, the most commonly co-occurring tags are 'cypher', 'java', and 'graph-databases'). These relations are aggregated and available through the Stack Exchange API, making it a trivial matter to generate the entire network of topics in graph form.
We can make even more inferences about the domain by looking at the sizes of the nodes and edges. If the co-occurrence of two tags makes up 90% of the questions carrying tag A, but only 10% of those carrying tag B, we can infer directionality and start to build hierarchies and communities.
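As an illustration of that inference, using made-up counts rather than real Stack Overflow data:

```python
# Hypothetical question counts: tag -> total questions, pair -> co-occurrences.
tag_counts = {"cypher": 10_000, "neo4j": 90_000}
cooccurrence = {("cypher", "neo4j"): 9_000}

def infer_direction(a, b, tag_counts, cooccurrence, ratio=3.0):
    """If the shared questions are a much larger share of tag a's questions
    than of tag b's, treat a as the narrower child topic of b."""
    shared = cooccurrence.get((a, b)) or cooccurrence.get((b, a), 0)
    share_a = shared / tag_counts[a]   # fraction of a's questions shared
    share_b = shared / tag_counts[b]   # fraction of b's questions shared
    if share_a > ratio * share_b:
        return (a, "CHILD_OF", b)
    if share_b > ratio * share_a:
        return (b, "CHILD_OF", a)
    return None  # roughly symmetric: no hierarchy inferred

print(infer_direction("cypher", "neo4j", tag_counts, cooccurrence))
# ('cypher', 'CHILD_OF', 'neo4j')
```

Here 90% of 'cypher' questions also carry 'neo4j', but only 10% of 'neo4j' questions carry 'cypher', so 'cypher' is inferred to sit under 'neo4j'.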
Putting the Pieces Together
We used the API to get all the Stack Overflow tags into our graph as nodes, and their co-occurrences as edges:
A small extract of our Stack Overflow graph
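Loading the tags and edges can be batched with parameterised Cypher. A sketch of one way to do it; the labels `Tag` and `RELATED_TO` are illustrative assumptions, not necessarily the schema Packt uses:

```python
# Parameterised Cypher to batch-load tags and their co-occurrence edges.
LOAD_TAGS = """
UNWIND $rows AS row
MERGE (a:Tag {name: row.a})
MERGE (b:Tag {name: row.b})
MERGE (a)-[r:RELATED_TO]-(b)
SET r.cooccurrences = row.count
"""

def to_rows(cooccurrence):
    """Turn a {(tag_a, tag_b): count} dict into driver-ready parameters."""
    return [{"a": a, "b": b, "count": n} for (a, b), n in cooccurrence.items()]

rows = to_rows({("neo4j", "cypher"): 9000, ("neo4j", "java"): 4000})
print(rows[0])  # {'a': 'neo4j', 'b': 'cypher', 'count': 9000}

# With the official neo4j Python driver, this would run as, e.g.:
# session.run(LOAD_TAGS, rows=rows)
```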
The next step was to represent all of our content in the graph as nodes, and all mentions of Stack Overflow tags as edges. I tried numerous different packages and solutions for this and, for now, I've settled on good old regular expressions in Python. Stack Overflow also provides lists of moderated synonyms for some tags, allowing us to capture even more information.
For a first pass, we used only the immediately available copy. For products, this was the copy on the website (already a keyword-dense summary), and for articles, this meant the whole content of the article. Initially, we quantified the relationships by putting the raw term frequency onto the edge.
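A sketch of that first pass: regex matching with a synonym list, putting the raw term frequency on each mention. The synonym pairs here are invented for illustration, in the spirit of Stack Overflow's moderated synonyms:

```python
import re

# Illustrative synonym list: alternative spellings map to a canonical tag.
SYNONYMS = {"js": "javascript", "py": "python"}
TAGS = {"javascript", "python", "neo4j"}

def mention_frequencies(text, tags=TAGS, synonyms=SYNONYMS):
    """Count mentions of each canonical tag, folding synonyms together."""
    counts = {}
    for term in list(tags) + list(synonyms):
        canonical = synonyms.get(term, term)
        pattern = r"\b" + re.escape(term) + r"\b"
        hits = len(re.findall(pattern, text, flags=re.IGNORECASE))
        if hits:
            counts[canonical] = counts.get(canonical, 0) + hits
    return counts

copy = "Learn Python fast. Py tips for Neo4j users, in Python."
print(sorted(mention_frequencies(copy).items()))
# [('neo4j', 1), ('python', 3)]
```

Note how the synonym 'Py' is folded into the canonical 'python' count, giving a frequency of 3 rather than 2.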
Just like that, all our content has domain-specific tags attached. We can immediately start doing traversals to look at how things are connected:
A subgraph of the local network surrounding our popular Learning Neo4j book
From what we have so far, we can immediately start making product recommendations and looking at how our content could be categorised. However, there are still some pieces missing.
Firstly, it would make no sense to generate category pages for all of the tags in our graph. The tags provide the basis for our categorisation, but we still need a cleaner, more hierarchical network in order to define specific areas of interest. For this, we have implemented an ontology of category nodes, which sits alongside the network of Stack Overflow tags. This allows us to extend the ontology by classifying the latent connections in the Stack Overflow network.
A demonstration of the ontology sitting alongside the Stack Overflow network
This provides us with the high-level categories we desire, and from there, it's a matter of defining the logic which connects content to a topic.
Secondly, raw tag frequencies are famously bad metrics of information. We need to move from raw counts to weighted edges. For this, we turn to metrics such as term frequency-inverse document frequency (TF-IDF) and topic size, as well as graph-theoretic measures such as centrality. This gives us a much more fine-grained picture of what our content is about.
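For illustration, a TF-IDF-style weighting over the raw frequencies; this is the generic textbook formulation with invented counts, not the exact weighting used in Packt's graph:

```python
import math

# Raw term frequencies per piece of content: content_id -> {tag: count}.
content_tags = {
    "book-1": {"python": 8, "neo4j": 2},
    "book-2": {"python": 5},
    "book-3": {"neo4j": 7, "python": 1},
}

def tfidf(content_id, tag, content_tags):
    """Weight a tag mention by how rare the tag is across all content."""
    tf = content_tags[content_id].get(tag, 0)
    n_docs = len(content_tags)
    df = sum(1 for tags in content_tags.values() if tag in tags)
    idf = math.log(n_docs / df)  # rarer tags get a higher weight
    return tf * idf

# 'python' appears in every document, so it is weighted down to zero,
# while the rarer 'neo4j' keeps a positive weight despite a lower raw count.
print(round(tfidf("book-1", "python", content_tags), 3))  # 0.0
print(round(tfidf("book-1", "neo4j", content_tags), 3))   # 0.811
```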
Finally, in recommendation terms, we have content-based filtering, but we have yet to add collaborative filtering. We are still basing our relationships on what Stack Overflow views as connected, not on usage patterns from our actual customers.
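A simple item-item co-occurrence count is one form the missing collaborative signal could take. This is purely illustrative, with made-up product slugs and histories; the post notes this piece was not yet in place:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase/view histories: user -> set of product slugs.
histories = {
    "u1": {"learning-neo4j", "python-ml"},
    "u2": {"learning-neo4j", "python-ml", "spark-cookbook"},
    "u3": {"python-ml", "spark-cookbook"},
}

def item_cooccurrence(histories):
    """Count how often two products appear in the same user's history."""
    pairs = Counter()
    for items in histories.values():
        for a, b in combinations(sorted(items), 2):
            pairs[(a, b)] += 1
    return pairs

pairs = item_cooccurrence(histories)
print(pairs[("learning-neo4j", "python-ml")])  # 2
```

Those pair counts could then be written onto edges between content nodes, sitting alongside the Stack Overflow-derived relations.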
Once we have all these pieces in place, we're able to generate our category pages.
Our graph sits behind a web service, which gets called whenever a tech page is rendered. The web service calls a particular set of Cypher queries and returns a list of content IDs, in various blocks, as defined by the design of the page. This approach gives us flexibility in a number of key areas.
The graph replicates information from our CMS, so the queries can fully define what information to display and in what order. This means all the heavy lifting of dynamic recommendations is done by the graph, not by the CMS.
The construction of the web service is modular, allowing us to change the Cypher queries underlying the blocks on the page easily, to test new ideas or weightings.
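One way to picture that modular design is a registry mapping page blocks to the Cypher that fills them; the block names, labels and queries below are assumptions for illustration, not the actual service:

```python
# Map page blocks to the Cypher that fills them, so a block's query can be
# swapped out to test new ideas or weightings without touching the service.
BLOCK_QUERIES = {
    "most-relevant": (
        "MATCH (t:Topic {name: $topic})<-[m:MENTIONS]-(c:Content) "
        "RETURN c.id ORDER BY m.weight DESC LIMIT $n"
    ),
    "newest": (
        "MATCH (t:Topic {name: $topic})<-[:MENTIONS]-(c:Content) "
        "RETURN c.id ORDER BY c.published DESC LIMIT $n"
    ),
}

def render_page(topic, run_query, n=10):
    """Return {block_name: [content ids]} for every block on the page."""
    return {name: run_query(q, topic=topic, n=n)
            for name, q in BLOCK_QUERIES.items()}

# A stub standing in for a real Neo4j session.run call:
fake_run = lambda q, **params: ["id-1", "id-2"]
print(render_page("machine-learning", fake_run))
# {'most-relevant': ['id-1', 'id-2'], 'newest': ['id-1', 'id-2']}
```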
Editors also have control over 'featured' content, so we can give prominence to certain pieces. The beauty of this is that if, for example, we decide to feature an article about machine learning in Python, it will automatically be featured on the machine learning page, the Python page and the data science page.
That's the central point of this post: using graphs has allowed us to make everything dynamic right from the start. If we so wished, we could make a category page for each and every Stack Overflow tag.
Most of them would be empty, and some of them would be far too full, but the website wouldn't even stutter. The scalability of tech pages is limited only by the scale of our content, not by the technology we implement; that's a great place to be.
This post has described just one aspect of how we use graphs at Packt. Tech pages are a great demonstration of how the flexibility of a graph allows for very dynamic categorisation, but there's a lot more to this story, and a lot of questions yet to answer.
The steps I've discussed above have been a catalyst for many more interesting ideas, and Neo4j has allowed us to develop really interesting solutions for our customers.
Published at DZone with permission of Greg Roberts. See the original article here.