I am currently writing a tutorial on Java NIO, and I have just updated the text on Buffers. I have a question for the Java community about the design of Buffers, but let me first describe the Buffer mechanism in question:
When you create a new Buffer, e.g. a ByteBuffer, it is in write mode. The Buffer has a limit saying how many bytes you can write into it, and a position of the next byte to write into.
When you are done writing data into a Buffer, you call flip(). The Buffer is now in read mode.
The limit property now tells how many bytes you can read from the Buffer (i.e. how many bytes were written), and the position points to the next byte to read.
When you are done reading, you call clear() or compact(), which switches the Buffer back into write mode.
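To make the mechanism concrete, here is a minimal sketch of that write / flip / read / clear cycle using the standard ByteBuffer API:

```java
import java.nio.ByteBuffer;

public class BufferModeDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8); // "write mode": position=0, limit=capacity=8

        buf.put((byte) 1);                       // position advances to 1
        buf.put((byte) 2);                       // position advances to 2

        buf.flip();                              // "read mode": limit=2 (bytes written), position=0

        System.out.println(buf.get());          // prints 1
        System.out.println(buf.get());          // prints 2

        buf.clear();                             // back to "write mode": position=0, limit=8
        System.out.println(buf.position() + " " + buf.limit()); // prints "0 8"
    }
}
```

Note that flip() sets the limit to the current position and resets the position to zero, which is exactly what makes the same two properties serve double duty across the two "modes".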
Why did the NIO designers choose to have this distinction between "write" and "read" mode, and thus the dual meaning of "limit" and "position"?
As I see it, it causes two problems:
1) A Buffer is always in either "read" or "write" mode. It cannot be in both modes at the same time.
2) When you receive a Buffer as a parameter, how do you know if it's in read or write mode?
In my opinion a much smarter design would have been to simply have a read position and a write position
instead of "position" and "limit". That way you could always both read from and write to the Buffer:
read, as long as the read position is less than the write position;
write, as long as the write position is less than the capacity of the Buffer.
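To illustrate the alternative I have in mind, here is a minimal sketch. The class name DualPositionBuffer and its methods are hypothetical (this is not a real NIO class), but it shows how separate read and write positions remove the need for any mode or flip() call:

```java
// Hypothetical sketch, not part of java.nio: a buffer with separate
// read and write positions instead of a shared position + limit.
public class DualPositionBuffer {
    private final byte[] data;
    private int readPosition = 0;  // index of next byte to read
    private int writePosition = 0; // index of next byte to write

    public DualPositionBuffer(int capacity) {
        this.data = new byte[capacity];
    }

    public boolean canRead()  { return readPosition < writePosition; }
    public boolean canWrite() { return writePosition < data.length; }

    public void put(byte b) {
        if (!canWrite()) throw new IllegalStateException("buffer full");
        data[writePosition++] = b;
    }

    public byte get() {
        if (!canRead()) throw new IllegalStateException("nothing to read");
        return data[readPosition++];
    }

    public static void main(String[] args) {
        DualPositionBuffer buf = new DualPositionBuffer(4);
        buf.put((byte) 10);
        System.out.println(buf.get()); // prints 10
        buf.put((byte) 20);            // interleaved write, no flip() needed
        System.out.println(buf.get()); // prints 20
    }
}
```

With this design there is no mode to track: a method receiving the buffer can always inspect canRead() and canWrite() directly.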
Does anyone have any deep insights into the NIO Buffer design?
- Jakob Jenkov