Last revised 2-18-98.
Currently, every instruction that accesses memory goes through the MMU to map virtual addresses to real addresses. This complicates the design of the MMU, particularly in superscalar machines, and an MMU can easily take up ~10% of the chip area. There is an alternative approach.
It comes from the past. Whereas today a 16k byte memory is a moderate-sized level-one cache, back in 1982 it was a sizable PC main memory. Back then, the processor used real addresses to access the 16k byte main memory, and programs ran in that memory. Code and/or data, when needed, was accessed from disc(s). Typically a file name was passed to DOS; for reads, DOS looked up where the code or data resided on disc, and for writes, it decided where to put it. This is memory mapping: the 16k byte main memory mapped onto the disc memory system. The mapping was so infrequent that it could reasonably be done in software.
To take a somewhat similar approach today, programs would run using 16k byte cache addresses and would memory-map to lower levels of memory only when a code or data transfer was needed. Again, the mapping should be infrequent enough that it can be done mostly in software, with the memory map itself residing in a subset of the cache.
Cray used relocation registers to provide flexibility in where a program executed within memory. They may be useful within a 16k cache as well.
Virtual memory is sometimes very useful, allowing computer time to be traded for reduced programming time. Generally, only a small number of programs need virtual memory, and when they do, placing the virtual-to-real translation lower in the memory hierarchy lets most of the translation be done in software. Also, usually only one or a few very large data sets need virtual memory; the rest of the program can use real memory.
Emulation of large legacy programs is an exception to the above.