
On EclipseCon Idol...


The submission deadline for EclipseCon 2009 has come and gone, and now the program committee has the unpleasant duty of selecting from among the many excellent submissions. But, like all good reality shows, you, the "viewer", get to provide input on what you think should be selected. Don't let your favorite submissions "get sent home"! Providing feedback will not only make the program committee's job easier, but will also help ensure that the talks YOU want to see make it onto the program.

I've submitted several proposals this year (see below), and I'd like your feedback. Please follow the link to the EclipseCon submission system to post any comments you may have.



Representational State Transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web. However, it is possible to design any enterprise software system in accordance with the REST architectural style without using the HTTP protocol and without interacting with the World Wide Web. Systems that follow the principles of REST are often referred to as RESTful. Proponents of REST argue that the Web has enjoyed its scalability and growth as a direct result of a few key design principles. Among these principles are the notions that application state and functionality are divided into resources and that every resource is uniquely addressable using a universal syntax for use in hypermedia links. Another key principle of REST is that all resources share a uniform interface for the transfer of state between client and resource, consisting of a constrained set of content types and a constrained set of well-defined operations.
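To make the uniform-interface idea concrete, here is a minimal, hypothetical sketch in Java of REST applied outside of HTTP: every resource is addressed by a URI string, and all resources share the same constrained set of operations. The `ResourceStore` class and the `/orders/42` URI are illustrative inventions, not anything from the talk proposal.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of REST's uniform interface without HTTP:
// every resource is identified by a URI, and all resources share
// the same constrained operations (get/put/delete).
public class ResourceStore {
    private final Map<String, String> resources = new HashMap<>();

    // Retrieve the current representation of a resource, or null.
    public String get(String uri) {
        return resources.get(uri);
    }

    // Create or replace the state of the resource at this URI.
    public void put(String uri, String state) {
        resources.put(uri, state);
    }

    // Remove the resource at this URI.
    public void delete(String uri) {
        resources.remove(uri);
    }

    public static void main(String[] args) {
        ResourceStore store = new ResourceStore();
        store.put("/orders/42", "status=shipped");
        System.out.println(store.get("/orders/42"));
        store.delete("/orders/42");
        System.out.println(store.get("/orders/42"));
    }
}
```

Note that clients only ever see URIs and representations; nothing in the interface is specific to any one kind of resource, which is exactly the constraint that lets the style scale.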

The Eclipse Modeling Framework (EMF) provides a Java runtime framework and tools for generative application development and fine-grained data integration based on simple models. Models can be specified directly using EMF's metamodel, Ecore, or imported from other forms, including UML and XML Schema. Given a model specification, EMF can generate a corresponding set of Java interfaces and implementation classes that can easily be mixed with hand-written code for maximum flexibility. When deployed, applications developed with EMF benefit from a powerful and extensible runtime which, among other features, includes a persistence mechanism that has always supported the principles of REST, perhaps even before the term "REST" became popular.
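As a rough illustration of the interface/implementation split that EMF's generator produces, here is a hand-written, self-contained sketch. The `Book` model class and its `title` attribute are hypothetical, and a plain class stands in for the EMF base class (real generated implementations extend EMF runtime classes and fire change notifications).

```java
// Hand-written sketch of the interface/implementation pair that EMF's
// generator produces for a simple model class. "Book" and "title" are
// illustrative only, not part of any real model discussed here.
interface Book {
    String getTitle();
    void setTitle(String value);
}

// Real generated implementations extend an EMF runtime base class;
// a plain class stands in for it in this self-contained sketch.
class BookImpl implements Book {
    private String title;

    public String getTitle() {
        return title;
    }

    public void setTitle(String value) {
        // Generated code would also notify adapters of the change here.
        this.title = value;
    }
}

public class GeneratedModelSketch {
    public static void main(String[] args) {
        // Client code works against the interface, so generated and
        // hand-written implementations can be mixed freely.
        Book book = new BookImpl();
        book.setTitle("Modeling with EMF");
        System.out.println(book.getTitle());
    }
}
```

The interface/implementation separation is what makes it easy to blend generated classes with hand-written code, as the abstract above mentions.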

This tutorial will provide an introduction to EMF, including alternatives for specifying a model, EMF's code generation tools, and key runtime framework concepts. As a practical usage of this knowledge, the presenters will show how EMF can be used to build RESTful applications, exploring some best practices for working with resources and other features of the framework.



The Model Development Tools (MDT) project focuses on big "M" modeling within the Modeling project; its purpose is twofold: 1) to provide an implementation of industry standard metamodels and 2) to provide exemplary tools for developing models based on those metamodels. Since its launch in September of 2006, MDT has undergone two major releases and is now working towards its third release as part of the 2009 Galileo Simultaneous Release. This short talk will provide an overview of the new features and components/projects in MDT and give an update on the status of its Galileo release.


In order to enable the much-needed agility demanded by today’s marketplace, business functions and associated processes must be supported by semantically accurate and reusable information, i.e., data and its associated metadata. A data model is an abstract model that describes how data is represented and accessed. Data modeling is the process of creating a data model instance by applying a data model theory, typically to solve some business enterprise requirement.

Data model instances can be categorized into various levels or perspectives:

- contextual data models, which identify entity classes;
- conceptual data models, which define the meaning of things in the organization;
- logical data models, which describe the logical representation of properties without regard to particular data manipulation technology;
- physical data models, which describe the physical means by which data are stored;
- data definitions, which represent the coding language of the schema on the specific development platform; and
- data instantiations, which hold the values of the properties applied to the data in a schema.

In this short talk, we’ll take a look at examples of each of these kinds of data models and explore how they are supported by projects/components at Eclipse.
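A tiny sketch may help separate three of these levels. Here a hypothetical "Customer" entity (the name, fields, and DDL are all invented for illustration) appears at the logical level as a platform-neutral class, at the data definition level as SQL DDL text, and at the instantiation level as concrete property values.

```java
// Illustrative sketch of one hypothetical "Customer" entity seen at
// three of the levels described above.
public class ModelLevels {
    // Logical level: properties described without regard to any
    // particular data manipulation technology.
    static class Customer {
        String name;
        int age;
        Customer(String name, int age) { this.name = name; this.age = age; }
    }

    // Data definition level: the schema expressed in the coding
    // language of a specific platform (here, SQL DDL).
    static final String DDL =
        "CREATE TABLE customer (name VARCHAR(100), age INTEGER)";

    public static void main(String[] args) {
        // Instantiation level: values applied to the schema's properties.
        Customer c = new Customer("Alice", 30);
        System.out.println(DDL);
        System.out.println(c.name + ", " + c.age);
    }
}
```

The contextual and conceptual levels have no code counterpart here; they live in the business vocabulary rather than in any artifact a compiler sees.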



There is a long-standing rift between software modelers and data modelers. You say class, I say entity; you say property, I say attribute; you say association, I say relationship; and so on. This long talk will take a candid look at the differences (and similarities) between UML and E/R modeling, and offer insight into what is being, or could be, done at Eclipse and elsewhere to reconcile the two. You might be surprised to learn that the divide between software modelers and data modelers isn't as great as you thought.


Last year, two symposia were held jointly by Eclipse and the OMG to focus on the synergies between open source and open specifications and to discuss how the joint future of Eclipse and the OMG can be shaped. By all accounts, both were quite successful. So, now what? Well, a lot of really useful feedback was shared, but not much has changed since then. The bottom line is that the status quo will no longer suffice. Design by committee and vendor politics won't get anyone anywhere. Neither will closedness and opaqueness. This long talk will review the feedback from the first two symposia and will compare and contrast the Eclipse and OMG ecosystems to identify concrete measures that can be taken to improve the state of affairs.

From http://kenn-hussey.blogspot.com

