RANDOM ACCESS FILES

On every major operating system, files work about the same. A lot of what you may believe to be true about files is not necessarily true. However, your programming language may add abstractions that lead you to believe that file systems are more complicated than they actually are. Java distinguishes between RandomAccessFiles and sequential access files...but your operating system does not, and your disk drive does not.

A file is a finite stream of bytes (byte: an integer between 0 and 255). A file has a file position, which can range from 0 to the length of the file. The file position is where the next read or write is going to happen. Normally, when you read or write, the file position is moved to just past your read or just past your write, so that the next time you read or write, you read the correct stuff!

If you attempt to read more bytes than exist between the file pointer and the end of the file, this is not an error. You simply don't get the number of bytes you asked for. If the number of bytes read is zero, again, this is not an error, but most programming languages interpret this as "end of file." Some people speak of "end of file" as though it is a special character or a special state, but it is not. "End of file" is just a figure of speech. It means that when you attempted to read from a file, zero bytes were actually read.

When you're writing, you can overwrite bytes already present in the file. You can add new bytes to the end of the file, making the file longer than it was. You can also truncate a file, losing bytes at the end and making the file shorter. You CAN'T: insert a byte between two bytes in a file, or remove a byte from the interior of a file.

The file pointer can also be manually reset. You can change the pointer to anything you want without actually reading or writing. When the pointer is reset, the next read or write will occur at the new location.
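A minimal sketch of these semantics in Python (the file name and contents here are made up for illustration; the behavior is the operating system's, not Python's):

```python
import os
import tempfile

# Create a small scratch file containing exactly 10 bytes.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"0123456789")

with open(path, "rb") as f:
    f.seek(4)                  # manually reset the file pointer to byte 4
    data = f.read(100)         # ask for more bytes than remain: not an error
    print(len(data))           # only the 6 remaining bytes come back
    leftover = f.read(100)     # pointer is now at the end of the file
    print(len(leftover))       # zero bytes read: this is "end of file"
```

Note that the second read returns zero bytes without raising any exception; "end of file" is just the zero-byte read the notes describe.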
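The allowed write operations can be sketched the same way: overwriting in place, and truncating. (Again, the file name and contents are invented for the example.)

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "edit.bin")
with open(path, "wb") as f:
    f.write(b"HELLO WORLD")    # 11 bytes

with open(path, "r+b") as f:   # open for read/write without truncating
    f.seek(6)
    f.write(b"THERE")          # overwrite bytes 6..10 in place
    f.truncate(9)              # chop the file down to 9 bytes

with open(path, "rb") as f:
    content = f.read()
print(content)                 # b'HELLO THE'
```

There is no corresponding call to insert or delete a byte in the middle; to do that you must rewrite everything after the insertion point yourself.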
If you set the file pointer larger than the actual length of the file, that's usually not an error: a read at that point will read zero bytes (indicating an EOF), and a write will write a new byte at that location, with nulls (zeroes) used to fill in the gap.

There are files for which random access is completely inappropriate: a tape drive, or the keyboard. In these situations, moving the file pointer around doesn't cause an error, but it doesn't do anything either.

Suppose I have 1000000 strings of 50 random capital letters, each string followed by a newline. On Windows, a newline is two bytes long: byte 13 followed by byte 10. (In hex, that's 0D 0A; in octal, \15\12; in string form it's "\r\n".) So, in reality, every record is 52 bytes long: 50 bytes for the letters and 2 bytes for the newline. If we want to read record number 3 (counting records from zero), the file location would be 3*52, or 156.

Binary search is the fastest search there is on sorted data. The downside is that the file has to be sorted, and a sorted flat file is difficult to maintain: adding and deleting records are slow processes. This is why binary search isn't used as much in practice despite being very fast. B-Trees, red-black trees, AVL trees: these have searches almost as fast as binary search...but you can maintain them much more easily. Hash tables: very fast in practice, but if you are unlucky or unwise they can be slow. Hash tables, in modern times, are usually stored as an array of linked lists, which means they are usually done in a language that allows pointers. In older languages, hash tables were stored in arrays called linear probing tables or quadratic probing tables.
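The seek-past-the-end behavior can be demonstrated directly (a throwaway file, invented for the example):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
with open(path, "wb") as f:
    f.write(b"AB")    # file is now 2 bytes long
    f.seek(10)        # move the pointer well past the end: not an error
    f.write(b"Z")     # byte 10 gets 'Z'; bytes 2..9 are filled with nulls

with open(path, "rb") as f:
    content = f.read()
print(content)        # b'AB\x00\x00\x00\x00\x00\x00\x00\x00Z'
print(len(content))   # 11
```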
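The record arithmetic above can be sketched like this. The file and helper function are invented for illustration; only 10 records are written instead of 1000000, and the file is opened in binary mode so the two-byte "\r\n" is written even on non-Windows systems:

```python
import os
import random
import string
import tempfile

RECORD_LEN = 50 + 2   # 50 letters plus the two-byte "\r\n" newline

path = os.path.join(tempfile.mkdtemp(), "records.txt")
with open(path, "wb") as f:
    for _ in range(10):
        letters = "".join(random.choices(string.ascii_uppercase, k=50))
        f.write(letters.encode("ascii") + b"\r\n")

def read_record(f, n):
    """Seek directly to record n (counting from 0) and read it."""
    f.seek(n * RECORD_LEN)
    return f.read(RECORD_LEN)

with open(path, "rb") as f:
    rec3 = read_record(f, 3)   # record 3 starts at byte 3*52 = 156
print(rec3[:50])               # the 50 letters of record 3
```

This only works because every record is exactly the same length; with variable-length records there is no way to compute where record n starts without reading everything before it.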
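Binary search over a sorted file of fixed-length records combines both ideas: each probe seeks directly to the middle record. A sketch, using a small invented file of zero-padded numeric keys (which sort the same way lexicographically as numerically):

```python
import os
import tempfile

REC_LEN = 52   # 50-byte key plus "\r\n"

path = os.path.join(tempfile.mkdtemp(), "sorted.txt")
keys = ["%050d" % i for i in range(0, 300, 3)]   # already in sorted order
with open(path, "wb") as f:
    for k in keys:
        f.write(k.encode("ascii") + b"\r\n")

def bsearch(f, target, n_records):
    """Binary search a sorted file of fixed-length records.
    Returns the record number of target, or -1 if absent."""
    lo, hi = 0, n_records - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        f.seek(mid * REC_LEN)          # jump straight to record mid
        key = f.read(50).decode("ascii")
        if key == target:
            return mid
        elif key < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

with open(path, "rb") as f:
    found = bsearch(f, "%050d" % 42, len(keys))     # 42 is record 14
    missing = bsearch(f, "%050d" % 43, len(keys))   # 43 is not in the file
print(found, missing)   # 14 -1
```

Each probe is one seek and one small read, so searching 1000000 records takes about 20 probes. The maintenance problem is also visible here: inserting a new key in order would mean shifting every record after it.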
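A minimal sketch of a chaining hash table, the "array of linked lists" layout mentioned above (the class name is invented; Python lists stand in for the linked lists):

```python
class ChainedHashTable:
    """An array of buckets; each bucket is a chain of (key, value) pairs."""

    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def put(self, key, value):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)   # key already present: overwrite
                return
        chain.append((key, value))        # otherwise add to the chain

    def get(self, key):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for k, v in chain:
            if k == key:
                return v
        return None

t = ChainedHashTable()
t.put("apple", 1)
t.put("banana", 2)
print(t.get("apple"), t.get("cherry"))   # 1 None
```

The "unlucky or unwise" slowdown shows up when many keys land in the same bucket: lookups then degrade to a linear scan of one long chain.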