The HDP Certified Developer (HDPCD) exam is tough: you need to know Pig, Hive, Sqoop, and Flume.
The first step in preparing is to walk through all of the steps outlined in the exam guide: read the Apache documentation for each project, follow the tutorials on the Hortonworks site, and do the hands-on exercises from GitHub. You must be able to do the hands-on work! The test is hands-on, not multiple choice; you need to know the commands and get them running. I suggest setting up the Amazon cloud environment using the exam directions and also downloading a Sandbox so that you can try out all of the Pig, Flume, Hive, and Sqoop commands and queries. You will need to run some from Ambari and some from the command line, so you must be comfortable with the syntax of both. The exam is strictly timed, and you will not have access to Google.
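As a sketch of what command-line practice looks like, here is the kind of thing you should be able to run without looking anything up (the table name, script name, and JDBC URL are illustrative, not from the exam):

```shell
# Run a Hive query directly from the CLI (table name is a placeholder)
hive -e "SELECT COUNT(*) FROM drivers;"

# Or run the same query through Beeline over JDBC,
# which is closer to what the Ambari Hive view does
beeline -u jdbc:hive2://localhost:10000 -e "SELECT COUNT(*) FROM drivers;"

# Execute a Pig script in local mode (script name is a placeholder)
pig -x local wordcount.pig
```

Practicing both paths matters because the Ambari views and the shell expose the same engines with slightly different ergonomics, and the exam can require either.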
Now for the important tips:
Search the Hortonworks Community; you will find answers, tutorials, and helpful hints on all of the above-mentioned Hadoop tools. You can also post questions there, and they will be answered quickly by experienced Hadoop people and possibly by actual project committers.
Read the presentations from Hadoop Summit and Hortonworks. They are a good way to get background and see the reasoning behind the tools. Another Hadoop Summit is coming this month, so there will be more material soon.
Watch training videos on YouTube. Hortonworks and many others have put hundreds of hours of training out there.
If you are a Hortonworks customer, you may have access to the Self-Paced Learning Library which consists of online classes with videos, presentations, and click-through learning.
Come to a meetup. There are plenty of Hadoop meetups around that provide hands-on exercises and live presentations on various Hadoop tools. You can also ask questions of certified developers, experts, and other people who are learning.
Look at the excellent set of Hadoop tutorials at CoreServlets.com.
Then get more hands-on: grab some data (Twitter feeds, logs, stock data, datasets from Kaggle), load it with Flume and Sqoop, parse it with Pig, and query it with Hive.
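That end-to-end workflow might look something like the following. This is a hedged sketch, not an exam answer: the database name, credentials, HDFS paths, script names, and Flume configuration file are all placeholders you would substitute with your own.

```shell
# 1. Pull relational data into HDFS with Sqoop
#    (JDBC URL, username, and table are illustrative)
sqoop import \
  --connect jdbc:mysql://localhost/salesdb \
  --username hadoop -P \
  --table orders \
  --target-dir /user/hadoop/orders

# 2. Parse and transform the imported files with a Pig script
pig -f parse_orders.pig

# 3. Query the transformed data with Hive
#    (assumes a Hive table was defined over the Pig output)
hive -e "SELECT product, SUM(amount) FROM orders_parsed GROUP BY product;"

# 4. Stream log or feed data in with a Flume agent
#    (agent1 and twitter.conf are placeholder names)
flume-ng agent --conf conf --conf-file twitter.conf --name agent1
```

Repeating a loop like this on a few different datasets is the fastest way to make the syntax of all four tools automatic before exam day.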
Some Additional Resources: