Architect1776 wrote:
How fast do cards really wear out from read/write cycles?
I have 20-year-old cards that work as well today as they did 20 years ago, and they get written to and erased all the time. One card failed about 17 years ago, but I assumed it was defective, since the other 5 cards are still going strong.
So again: how many write/erase cycles can a card take?
The cards from 20 years ago were built with much larger transistors and gates, which made them hardier: their rated write/erase cycle counts were much higher, perhaps 10x higher.
But those older cards had much lower capacities too.
As for your question: NAND Flash is structured in Blocks, and within a Block is some number of Pages. Erasing is done one whole Block at a time, and writing one Page at a time. If you have older data living in another Block, that Block is not touched (or refreshed) when you write new data to a different Block.
During an erase, the whole Block is erased in one operation. Data is then written one Page at a time: the page is first loaded into an on-chip RAM buffer, then the whole page is programmed into the array in parallel, and then on to the next page.
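To make the Block/Page relationship concrete, here is a toy model of it. The 1→0 programming rule and block-wide erase are standard NAND behavior, but the page and block sizes here are made-up illustrative numbers, not any particular part's geometry:

```python
# Toy NAND geometry: erase works on whole blocks, programming on
# single pages. Sizes below are illustrative, not a real part's.
PAGE_SIZE = 16 * 1024      # bytes per page (hypothetical)
PAGES_PER_BLOCK = 64       # pages per block (hypothetical)

class Block:
    def __init__(self):
        # Erased flash cells read back as all 1s (0xFF).
        self.pages = [bytearray(b'\xFF' * PAGE_SIZE)
                      for _ in range(PAGES_PER_BLOCK)]
        self.erase_count = 0

    def erase(self):
        # Erase resets every cell in the whole block in one operation.
        for p in self.pages:
            p[:] = b'\xFF' * PAGE_SIZE
        self.erase_count += 1

    def program(self, page_index, data):
        # Programming can only clear bits (1 -> 0); a 0 cannot be set
        # back to 1 without erasing the entire block first.
        page = self.pages[page_index]
        for i, b in enumerate(data):
            page[i] &= b

blk = Block()
blk.program(0, b'\x0F' * 8)               # clear some bits on page 0
print(blk.pages[0][0])                    # -> 15
blk.erase()                               # only way to get 0s back to 1s
print(blk.pages[0][0], blk.erase_count)   # -> 255 1
```

Each call to erase() bumps the erase counter, which is exactly the quantity that wear-leveling firmware tries to keep even across blocks.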
By making pages longer, the write speed of the memory can be made faster. The old memory cards used a page size of 512 bytes (called the Core Memory) plus some additional space (called the Spare Memory, which holds extra housekeeping information). Newer memory has increased that page size a lot, and also distributes data between multiple core arrays so that multiple pages can be written in parallel.
On top of that, the individual cells have gotten so small that errors are common, so the spare area has also grown to hold the information needed to do Error Correction within a page. Suppose you have a page size of 16K bytes, and the spare area holds in excess of 2K bytes. The extra data in the spare area is used to detect and correct a certain number of bad bits within the Core Memory. The details of this are closely guarded and vary from supplier to supplier; some designs can correct more bad bits than others.
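As a toy illustration of how extra parity bits stored alongside the data can repair a bad data bit, here is the classic Hamming(7,4) code: 4 data bits plus 3 "spare" parity bits let a read correct any single flipped bit. Real controllers use far stronger BCH or LDPC codes over whole pages, but the principle is the same:

```python
# Hamming(7,4): a single-error-correcting toy stand-in for the much
# stronger ECC real NAND controllers keep in the spare area.
def hamming74_encode(d):
    # d is a list of 4 data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword, positions 1..7

def hamming74_decode(c):
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4   # 0 = clean, else bad position
    if syndrome:
        c[syndrome - 1] ^= 1              # fix the flipped bit in place
    return [c[2], c[4], c[5], c[6]], syndrome

code = hamming74_encode([1, 0, 1, 1])
code[5] ^= 1                              # simulate one worn-out cell
data, pos = hamming74_decode(code)
print(data, pos)                          # -> [1, 0, 1, 1] 6
```

The decode returns the original data even though one stored bit was wrong, which is exactly the "fixed on-the-fly, user never notices" behavior described below, just at a miniature scale.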
As a block ages, certain bits are going to start to fail. As they do, Error Correction will mathematically detect and correct a certain number of bad bits on each page, and you, the user, will never be the wiser that you actually had bad bits fixed on the fly during the read. The manufacturer is never going to tell you how many bad bits it can correct, or how much of that budget has already been used up on any given page.
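Here is a rough back-of-the-envelope simulation of that silent-correction phase. Every number in it is invented (the real per-cycle failure rates and ECC budgets are exactly the guarded details mentioned above); the point is only the shape of the behavior: bad bits accumulate invisibly until the correction budget runs out, and only then does the page fail visibly:

```python
import random

random.seed(0)
P_NEW_BAD_BIT = 1e-4     # chance a P/E cycle creates a new bad bit (made up)
T_CORRECTABLE = 40       # bad bits the ECC can fix per page (made up)

bad_bits = 0
cycles = 0
first_correction = None
while bad_bits <= T_CORRECTABLE:       # page "fails" only past the budget
    cycles += 1
    if random.random() < P_NEW_BAD_BIT:
        bad_bits += 1
        if first_correction is None:
            first_correction = cycles  # first silently-corrected error

# Everything between these two cycle counts read back perfectly clean,
# even though the ECC was quietly repairing errors the whole time.
print(first_correction, cycles)
```

The gap between the first corrected error and the eventual failure can span most of the device's life, which is why a card can be accumulating bad bits for years while appearing flawless.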
Before I retired, I was involved in writing test programs for these memory devices, and during test we could determine this information at the wafer level. But once the die are built into finished packages, the internal memory manager exposes none of it to the end user.