Dana Groce

Demand Generation Specialist at MongoDB

New York, US

Joined Dec 2014

About

MongoDB makes development simple and beautiful. Fortune 500 enterprises, startups, hospitals, governments and organizations of all kinds use MongoDB because it is the best database for modern applications. MongoDB gives tens of thousands of organizations the tools to be agile and the freedom to innovate. Through simplicity, MongoDB changes what it means to build and run applications. Through openness, MongoDB elevates what it means to work with a software company. Please visit www.MongoDB.com to learn more.

Stats

Reputation: 100
Pageviews: 87.2K
Articles: 3
Comments: 0
Articles

Building an App with MongoDB: Creating a REST API Using the MEAN Stack Part 2
Written by Norberto Leite

In the first part of this blog series, we covered the basic mechanics of our application and undertook some data modeling. In this second part, we will create tests that validate the behavior of our application and then describe how to set up and run it.

Write the tests first

Let's begin by defining some small configuration libraries.

file name: test/config/test_config.js

Our server will run on port 8000 on localhost, which is fine for initial testing. Later, if we change the location or port number for a production system, we only have to edit this one file.

To prepare for our test cases, we need a clean test environment, and the following code gives us one.

file name: test/setup_tests.js

First, we connect to the database. Next, we drop the user collection, which ensures that the database is in a known starting state, and then drop the user feed entry collection. We then connect to Stormpath and delete all of the users in our test application, and finally close the database. The whole sequence runs through async.series to ensure that the functions execute in the correct order.

Frisby was briefly mentioned earlier; we will use it to define our test cases, as follows.

file name: test/create_accounts_error_spec.js

We start with the enroll route. In this case we deliberately omit the first name field, so we expect a status reply of 400 with a JSON error saying that the first name is missing. Let's "toss that frisby."

In the next case we test a password that has no lowercase letters. This results in an error being returned by Stormpath, so we again expect a status reply of 400. We then test an invalid email address - no @ sign and no domain name - and once more expect a 400.

Now let's look at some test cases that should succeed. We start by defining three users.

file name: test/create_accounts_spec.js

In the following example, we send the array of the three users defined above and expect a success status of 201. The JSON document returned contains the user object that was created, so we can verify that it matches our test data. Next, we test for a duplicate user by trying to create a user whose email address already exists.

One important issue is that we don't know a priori which API key Stormpath will return, so we need to generate a file dynamically that looks like the following; we can then use it in test cases that have to authenticate a user.

file name: /tmp/readerTestCreds.js

To create this temporary file, we connect to MongoDB and retrieve the user information, which is done by the following code.

file name: tests/writeCreds.js

In the following code, the first line loads the temporary file containing the user information. We have also defined several feeds, such as Dilbert and the Eater Blog.

file name: tests/feed_spec.js

Previously we defined some users, but none of them had subscribed to any feeds. In the following code we test feed subscription. Note that authentication is now required, which is achieved by calling .auth with the Stormpath API keys. Our first test checks for an empty feed list; in the next test case, we subscribe our first test user to the Dilbert feed.
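
As a rough sketch of what one of these authenticated frisby cases might look like (the URL, feed address, credentials-file layout, and expected status code are assumptions rather than the post's exact spec):

  var frisby = require('frisby');
  var creds = require('/tmp/readerTestCreds.js');    // written earlier by the writeCreds step; layout assumed
  var TEST_URL = 'http://localhost:8000/api/v1.0';   // host and port from test/config/test_config.js

  frisby.create('Subscribe first test user to the Dilbert feed')
    .put(TEST_URL + '/feeds/subscribe',
         { feedURL: 'http://feeds.feedburner.com/DilbertDailyStrip' },
         { json: true })
    .auth(creds.users[0].sp_api_key_id, creds.users[0].sp_api_key_secret)  // Stormpath API key pair
    .expectStatus(201)                                // success code assumed; adjust to match the API
    .toss();
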
In our next test case, we try to subscribe our first test user to a feed they are already subscribed to. We then subscribe the same user to a new feed, and the result returned should confirm that the user is now subscribed to two feeds. Finally, we use our second test user to subscribe to a feed.

The REST API

Before we begin writing our REST API code, we need to define some utility libraries. First, we define how our application connects to the database. Putting this information into a file gives us the flexibility to use different database URLs for development and production systems.

file name: config/db.js

If we wanted to turn on database authentication, we could put that information in a file as shown below. This file should not be checked into source code control, for obvious reasons.

file name: config/security.js

We can keep the Stormpath API and secret keys in a properties file, as follows, and this file needs to be managed just as carefully.

file name: config/stormpath_apikey.properties

Express.js overview

In Express.js, we create an "application" (app). The application listens on a particular port for HTTP requests to come in. When requests arrive, they pass through a middleware chain. Each link in the chain is given a req (request) object and a res (result) object in which to store the results; each link can choose to do work or pass it on to the next link. We add new middleware via app.use(). The main middleware is called our "router", which looks at the URL and routes each URL/verb combination to a specific handler function.

Creating our application

Now we can finally see our application code, which is quite small because the handlers for the various routes are kept in separate files.

file name: server.js

We define our own middleware at the end of the chain to handle bad URLs. Our server application is now listening on port 8000, so let's print a message on the console to say so.

Defining our Mongoose data models

We use Mongoose to map objects on the Node.js side to documents inside MongoDB. Recall that earlier we defined four collections: the feed collection, the feed entry collection, the user collection, and the user-feed-entry mapping collection. We will now define schemas for these four collections. Let's begin with the user schema. Notice that we can also normalize the data, for example converting strings to lowercase and removing leading or trailing whitespace with trim.

file name: app/routes.js

In the following code, we also tell Mongoose which indexes need to exist; Mongoose will create them if they are not already present in our MongoDB database. The unique constraint ensures that duplicates are not allowed. "email : 1" maintains email addresses in ascending order; "email : -1" would maintain them in descending order. We repeat the process for the other three collections, and the following is an example of a compound index on four fields, each maintained in ascending order.

Every GET, POST, PUT, and DELETE request that comes in must have the correct content type, application/json, after which the next link in the chain is called. Now we need to define handlers for each URL/verb combination. The link to the complete code is available in the resources section, and we show just a few examples below. Note the ease with which we can use Stormpath, and notice that we have defined /api/v1.0, so the client would actually call /api/v1.0/user/enroll, for example.
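
The handlers themselves live in the repository linked in the resources section; as a hypothetical sketch (the field check, error message, and status codes are illustrative, not the post's exact code), a versioned enroll route might look like this:

  var express = require('express');
  var bodyParser = require('body-parser');

  var app = express();
  app.use(bodyParser.json());            // every route expects application/json bodies

  var router = express.Router();
  router.post('/user/enroll', function (req, res) {
    if (req.body.firstName === undefined) {
      // Missing required parameter: the error-spec tests expect a 400 reply here.
      return res.status(400).json({ error: 'Undefined First Name' });
    }
    // ...create the account through the Stormpath client here, then acknowledge:
    res.status(201).json(req.body);
  });

  // Mounting the router under /api/v1.0 gives every route the version prefix.
  app.use('/api/v1.0', router);
  app.listen(8000);
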
In the future, if we changed the API, say to 2.0, we could use /api/v2.0. This would have its own router and code, so clients using the v1.0 API would still continue to work.

Starting the server and running tests

Finally, here is a summary of the steps we need to follow to start the server and run the tests:

  • Ensure that the MongoDB instance is running: mongod
  • Install the Node libraries: npm install
  • Start the REST API server: node server.js
  • Run the test cases:
    node setup_tests.js
    jasmine-node create_accounts_error_spec.js
    jasmine-node create_accounts_spec.js
    node write_creds.js
    jasmine-node feed_spec.js

MongoDB University provides excellent free training. There is a course specifically aimed at Node.js developers, and the link can be found in the resources section below. The resources section also contains links to good MongoDB data modeling resources.

Resources

  • HTTP status code definitions
  • Chad Tindel's GitHub repository
  • M101JS: MongoDB for Node.js Developers
  • Data Models
  • Data Modeling Considerations for MongoDB Applications
June 29, 2015
· 1,959 Views
Building an App with MongoDB: Creating a REST API Using the MEAN Stack Part 1
Written by Norberto Leite

Introduction

In this two-part blog series, you will learn how to use MongoDB and the Mongoose Object Data Mapping (ODM) library with Express.js and Node.js. These technologies use a uniform language - JavaScript - providing performance gains in the software and productivity gains for developers. In this first part, we describe the basic mechanics of our application and undertake the data modeling. In the second part, we create tests that validate the behavior of the application and then describe how to set up and run it.

No prior experience with these technologies is assumed, and developers of all skill levels should benefit from this blog series. So, if you have no previous experience using MongoDB or JavaScript, or building a REST API, don't worry - we will cover these topics with enough detail to get you past the simplistic examples one tends to find online, including authentication, structuring code in multiple files, and writing test cases. Let's begin by defining the MEAN stack.

What is the MEAN stack?

The MEAN stack can be summarized as follows:

  • M = MongoDB/Mongoose.js: the popular database, and an elegant ODM for Node.js.
  • E = Express.js: a lightweight web application framework.
  • A = Angular.js: a robust framework for creating HTML5 and JavaScript-rich web applications.
  • N = Node.js: a server-side JavaScript interpreter.

The MEAN stack is a modern replacement for the LAMP (Linux, Apache, MySQL, PHP/Python) stack that became the popular way of building web applications in the late 1990s. In our application we won't be using Angular.js, as we are not building an HTML user interface. Instead, we are building a REST API, which has no user interface of its own but can serve as the basis for any kind of interface: a website, an Android application, or an iOS application. You might say we are building our REST API on the ME(a)N stack, but we have no idea how to pronounce that!

What is a REST API?

REST stands for Representational State Transfer. It is a lighter-weight alternative to the XML-based SOAP and WSDL API protocols. REST uses a client-server model in which the server is an HTTP server and the client sends HTTP verbs (GET, POST, PUT, DELETE) along with a URL and URL-encoded variable parameters. The URL describes the object to act upon, and the server replies with a result code and valid JavaScript Object Notation (JSON). Because the server replies with JSON, the MEAN stack is particularly well suited to our application: all the components speak JavaScript, and MongoDB interacts well with JSON. We will see some JSON examples later, when we start defining our data models.

The CRUD acronym is often used to describe database operations: CREATE, READ, UPDATE, and DELETE. These operations map very nicely onto the HTTP verbs, as follows:

  • POST: a client wants to insert or create an object.
  • GET: a client wants to read an object.
  • PUT: a client wants to update an object.
  • DELETE: a client wants to delete an object.

These operations will become clear later when we define our API. Some of the common HTTP result codes used inside REST APIs are as follows:

  • 200 - "OK".
  • 201 - "Created" (used with POST).
  • 400 - "Bad Request" (perhaps missing required parameters).
  • 401 - "Unauthorized" (missing authentication parameters).
  • 403 - "Forbidden" (you were authenticated but lack the required privileges).
  • 404 - "Not Found".

A complete description can be found in the RFC document listed in the resources section at the end of this blog.
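
As a hypothetical sketch, not code from the post, this is roughly how those verbs and result codes come together in an Express application like our reader (the handler bodies are placeholders, and the routes anticipate the table later in this post):

  var express = require('express');
  var app = express();

  // CREATE: enroll a new user; a successful POST replies 201 "Created".
  app.post('/user/enroll', function (req, res) { res.status(201).json({}); });

  // READ: return the user's feed subscriptions with 200 "OK".
  app.get('/feeds', function (req, res) { res.status(200).json([]); });

  // UPDATE: mark a feed's entries read or unread.
  app.put('/feeds/:feedid', function (req, res) { res.status(200).json({}); });

  // DELETE: unsubscribe from a feed.
  app.delete('/feeds/:feedid', function (req, res) { res.status(200).json({}); });

  app.listen(8000);
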
We will use these result codes in our application, and you will see some examples shortly.

Why Are We Starting with a REST API?

Developing a REST API enables us to create a foundation upon which we can build all other applications. As previously mentioned, these applications may be web-based or designed for specific platforms, such as Android or iOS. Today there are also many companies building applications that do not use an HTTP or web interface at all, such as Uber, WhatsApp, Postmates, and Wash.io. A REST API also makes it easy to implement other interfaces or applications over time, turning the initial project from a single application into a powerful platform.

Creating our REST API

The application we will be building is an RSS aggregator, similar to Google Reader. Our application will have two main components:

  • The REST API
  • Feed Grabber (similar to Google Reader)

In this blog series we will focus on building the REST API, and we will not cover the intricacies of RSS feeds. However, code for Feed Grabber is available in a GitHub repository, listed in the resources section of this blog.

Let's now describe the process we will follow in building our API. We begin by defining the data model for the following requirements:

  • Store user information in user accounts
  • Track RSS feeds that need to be monitored
  • Pull feed entries into the database
  • Track user feed subscriptions
  • Track which feed entries a user has already read

Users will need to be able to do the following:

  • Create an account
  • Subscribe/unsubscribe to feeds
  • Read feed entries
  • Mark feeds/entries as read or unread

Modeling Our Data

An in-depth discussion of data modeling in MongoDB is beyond the scope of this article, so see the resources section for good material on this topic. We will need four collections to manage this information:

  • Feed collection
  • Feed entry collection
  • User collection
  • User-feed-entry mapping collection

Let's take a closer look at each of these collections.

Feed Collection

Let's now look at some code. To model a feed, we can use a JSON document like the one sketched below.

If you are familiar with relational database technology, then you know about databases, tables, rows, and columns. In MongoDB there is a mapping for most of these relational concepts. At the highest level, a MongoDB deployment supports one or more databases. A database contains one or more collections, which are similar to tables in a relational database. Collections hold documents, and each document in a collection is, at the highest level, similar to a row in a relational table. However, documents do not follow a fixed schema with pre-defined columns of simple values. Instead, each document consists of one or more key-value pairs where the value can be simple (e.g., a date) or more sophisticated (e.g., an array of address objects).

The JSON document below is an example of one RSS feed for the Eater Blog, which tracks information about restaurants in New York City. There are a number of different fields, but the key ones our client application may be interested in are the URL of the feed and the feed description. The description is important: if we build a mobile application, it can show a nice summary of the feed. The remaining fields in the document are for internal use.

A very important field is _id. In MongoDB, every document must have a field called _id. If you create a document without this field, MongoDB will create it for you at the point where you save the document.
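
As an illustrative sketch of such a feed document (the _id value is the one referenced later in the text; the feed URL and the remaining field names are assumptions, not the post's exact listing):

  {
      "_id" : ObjectId("523b1153a2aa6a3233a913f8"),
      "feedURL" : "http://www.eater.com/rss/index.xml",
      "title" : "Eater Blog",
      "description" : "Tracking the dining scene in New York City",
      "state" : "live"
  }
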
In MongoDB, this field is a primary key, and MongoDB guarantees that its value is unique within a collection.

Feed Entry Collection

After feeds, we want to track feed entries. Here is what a document in the feed entry collection looks like. Again there is an _id field, along with fields such as description, title, and summary. For the content field, note that we use an array, and the array itself stores a document: MongoDB allows us to store sub-documents in this way, which can be very useful when we want to hold all of the related information together. The entryID field uses the tag format to avoid duplicate feed entries. Notice also the feedID field of type ObjectId: its value is the _id of the Eater Blog document described earlier. This provides a referential model, similar to a foreign key in a relational database. If we wanted to see the feed document associated with this ObjectId, we could take the value 523b1153a2aa6a3233a913f8, query the feed collection on _id, and it would return the Eater Blog document.

User Collection

Here is the document we could use to keep track of users. A user has an email address, a first name, and a last name. There are also sp_api_key_id and sp_api_key_secret fields - we will use these later with Stormpath, a user management API. The last field, called subs, is a subscription array that tells us which feeds this user is subscribed to.

User-Feed-Entry Mapping Collection

The last collection allows us to map users to feeds and to track which feeds have been read. We use a Boolean (true/false) to mark the feed as read or unread.

Functional Requirements for the REST API

As previously mentioned, users need to be able to do the following:

  • Create an account.
  • Subscribe/unsubscribe to feeds.
  • Read feed entries.
  • Mark feeds/entries as read or unread.

Additionally, a user should be able to reset their password. The following routes show how these operations map to HTTP verbs and variables:

  • POST /user/enroll - Register a new user (variables: firstName, lastName, email, password)
  • PUT /user/resetPassword - Password reset (variables: email)
  • GET /feeds - Get the user's feed subscriptions, each with its description and unread count
  • PUT /feeds/subscribe - Subscribe to a new feed (variables: feedURL)
  • GET /feeds/entries - Get all entries for the feeds the user is subscribed to
  • GET /feeds/<feedid>/entries - Get all entries for a specific feed
  • PUT /feeds/<feedid> - Mark all entries for a specific feed as read or unread (variables: read = <true | false>)
  • PUT /feeds/<feedid>/entries/<entryid> - Mark a specific entry as read or unread (variables: read = <true | false>)
  • DELETE /feeds/<feedid> - Unsubscribe from this particular feed

In a production environment, secure HTTP (HTTPS) would be the standard approach when sending sensitive details, such as passwords.

Real World Authentication with Stormpath

In robust real-world applications it is important to provide user authentication, and we need a secure approach to managing users, passwords, and password resets. There are a number of ways we could authenticate users for our application. One possibility is to use Node.js with the Passport plugin, which could be useful if we wanted to authenticate with social media accounts such as Facebook or Twitter. Another possibility is to use Stormpath. Stormpath provides User Management as a Service and supports authentication and authorization through API keys.
Basically, Stormpath maintains a database of user details and passwords, and a client application's REST API calls the Stormpath REST API to perform user authentication; the flow of requests and responses therefore runs from the client, through our REST API, to Stormpath and back.

In detail, Stormpath provides a secret key for each "Application" that is defined with their service. For example, we could define an application as "Reader Production" or "Reader Test". This could be very useful while we are still developing and testing our application, as we may be frequently adding and deleting test users. Stormpath also provides an API Key Properties file, and it allows us to define password strength requirements for each application, such as:

  • Must have >= 8 characters.
  • Must include lowercase and uppercase.
  • Must include a number.
  • Must include a non-alphabetic character.

Stormpath keeps track of all of our users and assigns them API keys, which we can use for our REST API authentication. This greatly simplifies the task of building our application, as we don't have to write the code for authenticating users ourselves.

Node.js

Node.js is a runtime environment for server-side and network applications. It uses JavaScript and is available for many different platforms, such as Linux, Microsoft Windows, and Apple OS X. Node.js applications are built using many library modules, and there is a very rich ecosystem of libraries available, some of which we will use to build our application.

To start using Node.js, we need to define a package.json file describing our application and all of its library dependencies. The Node.js package manager, npm, installs copies of the libraries in a subdirectory called node_modules/ inside the application directory. This has benefits: it isolates the library versions for each application and so avoids the compatibility problems that could arise if the libraries were installed in a standard system location such as /usr/lib. The command npm install creates the node_modules/ directory with all of the required libraries.

In our package.json file, the application is called reader-api and the main file is server.js, followed by the list of dependent libraries and their versions. Some of these libraries are designed for parsing the HTTP queries. The test harness we will use is called frisby, and jasmine-node is used to run the frisby scripts.

One library that is particularly important is async. If you have never used Node.js, it is important to understand that Node.js is designed to be asynchronous. Any function that does blocking input/output (I/O), such as reading from a socket or querying a database, takes a callback function as its last parameter and then continues with the control flow, only returning to that callback once the blocking operation has completed.

Consider the simple example sketched below. We might expect the output to be "one" followed by "two", but in fact it might be "two" followed by "one", because the line that prints "one" can happen later, asynchronously, in the callback. We say "might" because if conditions are just right, "one" might still print before "two". This element of uncertainty in asynchronous programming is called non-deterministic execution. For many programming tasks this is actually desirable and permits high performance, but clearly there are times when we want to execute functions in a particular order.
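
Here is a minimal sketch of both behaviors, assuming the async module from npm (this is illustrative code rather than the post's original listing):

  var async = require('async');

  // one() stands in for a blocking operation (e.g. a database query): it only
  // "returns" by invoking its callback once the work completes.
  function one(callback) {
    setTimeout(function () {
      console.log('one');
      callback(null);
    }, 100);
  }

  function two(callback) {
    console.log('two');
    callback(null);
  }

  // Naive, non-deterministic version: "two" will very likely print before "one",
  // because one()'s console.log runs later, inside the asynchronous callback.
  //   one(function () {});
  //   two(function () {});

  // Ordered version: async.series starts each function only after the previous
  // one has invoked its callback, so this reliably prints "one" then "two".
  async.series([one, two], function (err) {
    if (err) { console.error(err); }
  });
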
In the second half of the sketch above, async.series achieves the desired result of printing the numbers in the correct order: we are guaranteed that function two will only be called after function one has completed.

Wrapping Up Part 1

Now that we have seen the basic mechanics of Node.js and async function setup, we are ready to move on. Rather than diving straight into the application, we will start by creating tests that validate its behavior. This approach is called test-driven development and has two very good features:

  • It helps the developer really understand how data and functions are consumed, and it often exposes subtle needs, like the ability to return two or more things in an array instead of just one.
  • By writing tests before building the application, the paradigm becomes "broken/unimplemented until proven tested OK" instead of "assumed to be working until a test fails." The former is a safer way to keep the code healthy.
June 28, 2015
· 10,577 Views · 2 Likes
[VIDEO] How to Create a Pub/Sub Application with MongoDB
Many applications need to be alerted when new documents are inserted or modified in the database. These include real-time apps and features such as notifications, messaging, or chat. It is possible to build this kind of application with MongoDB using features that are native to the database: capped collections and tailable cursors. Details and code samples that illustrate the technique are provided in the blog post How to Create a Pub/Sub Application with MongoDB, and a screencast accompanying the post shows the application being built.
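
As a rough sketch of the technique, assuming the official Node.js driver (the database and collection names are made up, and cursor option handling can differ between driver versions):

  var MongoClient = require('mongodb').MongoClient;

  MongoClient.connect('mongodb://localhost:27017/pubsub', function (err, db) {
    if (err) throw err;

    // A capped collection preserves insertion order and can be tailed, which
    // makes it usable as a lightweight message channel (size is in bytes).
    db.createCollection('messages', { capped: true, size: 1024 * 1024 }, function (err, channel) {
      if (err) throw err;

      // Publishing is just an ordinary insert into the capped collection.
      channel.insertOne({ topic: 'demo', body: 'hello, subscribers' }, function (err) {
        if (err) throw err;

        // A tailable, await-data cursor behaves like `tail -f`: it stays open
        // and yields each new document as a publisher inserts it.
        var cursor = channel.find({})
          .addCursorFlag('tailable', true)
          .addCursorFlag('awaitData', true);

        cursor.on('data', function (doc) {
          console.log('received:', doc);
        });
      });
    });
  });
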
June 27, 2015
· 999 Views
