Gilad Bracha raises a fantastic point in his latest blog entry: Java would be just as efficient as it is now without primitives. In fact, Gilad says that Java's original sin is that it's not a pure object-oriented language.
As one example, consider type char. When Java was introduced, the Unicode standard required 16 bits. This later changed, as 16 bits were inadequate to describe the world’s characters.
In the meantime, Java had committed to a 16-bit character type. Now, if characters were objects, their representation would be encapsulated, and nobody would be much affected by how many bits are needed. A primitive type like char, however, advertises its representation to the world. Consequently, people dealing with Unicode in Java have to handle the encoding of code points themselves.
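To make that concrete, here is a small sketch of what leaks through. A Java char is a 16-bit UTF-16 code unit, not a character: any code point outside the Basic Multilingual Plane has to be stored as a surrogate pair, and callers must know the difference between counting chars and counting code points.

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // U+1F600 lies outside the Basic Multilingual Plane, so it
        // needs two 16-bit chars (a surrogate pair) in a Java String.
        String s = new String(Character.toChars(0x1F600));

        System.out.println(s.length());                      // 2 -- 16-bit char units
        System.out.println(s.codePointCount(0, s.length())); // 1 -- actual characters

        // A single char cannot hold this code point at all:
        System.out.println(Character.isSupplementaryCodePoint(0x1F600)); // true
    }
}
```

If char were an encapsulated object rather than a primitive, this bookkeeping could live inside the type instead of in every caller.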
The blog entry argues that, even accounting for potential performance issues with arrays, Java would not have taken a significant performance hit had it been fully object oriented.
While this is something that Gilad has discussed at length, it's something I had never thought about before. I just accepted primitives as part of the language. One commenter suggested using a source keyword in Java 7 to make this change available, although that does seem unlikely.
I also discussed this with Kirk Knoernschild, who isn't a fan of primitives either.
We had a method that updated a DB2 database and essentially encapsulated a SQL statement. The arguments passed into the method were the values being inserted, updated, or deleted. Because the method took primitives, there was no way to pass in null. So even for columns that were nullable in the database, developers were inserting zero. And there is a big difference between zero and null; the two cannot be treated the same in many cases.
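The problem can be sketched in a few lines. The method and column names below are hypothetical, and the string-building is only for illustration (real JDBC code would use PreparedStatement.setNull or setObject), but the core issue is the same: a primitive int parameter can never carry null, while the wrapper type Integer can.

```java
public class NullableColumnDemo {
    // Hypothetical update method with a primitive parameter:
    // callers cannot pass null, so nullable columns get 0 instead.
    static String updateWithPrimitive(int quantity) {
        return "UPDATE orders SET quantity = " + quantity;
    }

    // With the wrapper type, null can flow through and become SQL NULL.
    static String updateWithWrapper(Integer quantity) {
        return "UPDATE orders SET quantity = "
                + (quantity == null ? "NULL" : quantity.toString());
    }

    public static void main(String[] args) {
        System.out.println(updateWithPrimitive(0));  // zero, not NULL
        System.out.println(updateWithWrapper(null)); // a genuine NULL
    }
}
```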
Is this something that others in the community consider to be an issue?