
Re: SSD as Cache?



On 11/08/2013 09:39 AM, Neal Murphy wrote:
Using the -noatime mount option will extend the lifetime of the SSD.

Assuming the application cache doesn't need atime, noatime should help application performance on any type of drive.


If the application cache is on the same partition as other data, setting noatime might break other things. Putting the application cache on its own partition/ drive would avoid such problems and allow optimum tuning.


Alternatively, relatime might work (and may already be in place; see mount(8)).
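
For what it's worth, a quick way to check which atime options are currently in effect for a given mount point (a minimal Python sketch; "/srv/cache" below is only a placeholder for the real cache mount point):

    # Print the atime-related mount options for a given mount point by
    # scanning /proc/mounts.  "/srv/cache" is a hypothetical example.
    import sys

    def atime_options(mount_point):
        with open("/proc/mounts") as mounts:
            for line in mounts:
                fields = line.split()
                if len(fields) >= 4 and fields[1] == mount_point:
                    return [o for o in fields[3].split(",") if "atime" in o]
        return None

    if __name__ == "__main__":
        mp = sys.argv[1] if len(sys.argv) > 1 else "/srv/cache"
        print(atime_options(mp))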


RAM is fairly cheap these days. Instead, if you can, increase RAM by 16GiB,
leave the cache on rotating media, and let Linux cache the files in RAM. After
that, performance improvements will come from fixing inefficient code.

It's a matter of maximizing the caching equation:

        s = h * f - K

        s = average time saved per access

        h = probability of a cache hit

        f = average time to calculate the item from its arguments and primary storage (i.e., the cost without the cache)

        K = average time to fetch the item from the cache
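
For example, with made-up numbers, say h = 0.8, f = 50 ms, and K = 2 ms:

        s = 0.8 * 50 ms - 2 ms = 38 ms saved per access, on average

With h = 0.2, f = 5 ms, and the same K, s = 0.2 * 5 ms - 2 ms = -1 ms; the cache would make things worse. Whenever h * f < K, caching hurts.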


Regarding h:

- Choosing what to cache and what not to cache is critical.

- Cache size matters.  Bigger isn't always better.  Why 10 GB?

- Cache control implementation is important, and depends on the above.

- I don't know if the application has tuning parameters for the above.

- The higher h, the more likely caching will help. But, maximum h does not imply maximum s. (One rough way to estimate h is sketched after this list.)
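
One rough way to estimate h is to replay an access trace against a cache of the intended size and count hits. A minimal Python sketch; the synthetic trace and the 1024-entry cache are made-up stand-ins for the application's real access pattern and cache:

    # Estimate the hit probability h by replaying a request trace against
    # a fixed-size LRU cache.  The trace here is synthetic and skewed
    # toward a few hot keys; substitute the application's real access log.
    from functools import lru_cache
    import random

    @lru_cache(maxsize=1024)
    def fetch(key):
        return key                  # stand-in for the real, expensive lookup

    trace = [int(random.paretovariate(1.2)) for _ in range(100_000)]

    for key in trace:
        fetch(key)

    hits, misses, _, _ = fetch.cache_info()
    print(f"h = {hits / (hits + misses):.3f}")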


Regarding f:

- SSDs should be faster than HDDs.

- RAM drives should be faster than SSDs.

- Choice of file system is important.

- Kernel caches should be faster than any drive/ file system. (A quick demonstration is sketched after this list.)

- All of the above can be tuned.

- Application memory is the fastest. I don't know if the application offers in-memory caching.

- Everything uses RAM. Bigger is usually better. But, populating multiple DIMM slots per channel can be slower than populating one DIMM per channel.
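
As for kernel caches (the demonstration mentioned above), timing two consecutive reads of the same large file shows the effect. A minimal sketch; the path is only a placeholder, and the first read is slow only if the file is not already cached:

    # Time two consecutive reads of the same file.  The second read is
    # normally served from the kernel page cache, provided the file fits
    # in free RAM.  The path below is a hypothetical example.
    import time

    PATH = "/srv/cache/some-large-file"

    def timed_read(path):
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(1 << 20):      # 1 MiB chunks
                pass
        return time.perf_counter() - start

    print("first read :", timed_read(PATH), "seconds")
    print("second read:", timed_read(PATH), "seconds")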


Regarding K:

- Similar considerations as for f, but the cache should be much smaller than primary storage, allowing faster, but higher cost-per-byte, solutions.

- The smaller the ratio K/f, the more likely caching will help. But, minimum K/f does not imply maximum s.


The key is profiling/ benchmarking the various permutations.
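
A minimal sketch of that kind of measurement, assuming a CPU-bound calculation and a simple on-disk cache as stand-ins for the real application (the workload, cache layout, and hit rates below are all made up):

    # Measure f (recompute) and K (fetch from an on-disk cache), then
    # estimate s = h * f - K for a few assumed hit rates.  The workload,
    # cache layout, and hit rates are all stand-ins for the real thing.
    import hashlib, os, pickle, tempfile, time

    CACHE_DIR = tempfile.mkdtemp(prefix="cache-bench-")

    def expensive(key):
        data = str(key).encode()
        for _ in range(200_000):        # stand-in for the real calculation
            data = hashlib.sha256(data).digest()
        return data

    def cache_put(key, value):
        with open(os.path.join(CACHE_DIR, f"{key}.pkl"), "wb") as fh:
            pickle.dump(value, fh)

    def cache_get(key):
        with open(os.path.join(CACHE_DIR, f"{key}.pkl"), "rb") as fh:
            return pickle.load(fh)

    def timed(fn, *args, repeat=50):
        start = time.perf_counter()
        for _ in range(repeat):
            fn(*args)
        return (time.perf_counter() - start) / repeat

    f = timed(expensive, 42)            # cost without the cache
    cache_put(42, expensive(42))
    K = timed(cache_get, 42)            # cost of a cache fetch

    for h in (0.2, 0.5, 0.9):
        print(f"h={h:.1f}  f={f*1e3:.2f} ms  K={K*1e3:.2f} ms"
              f"  s={(h*f - K)*1e3:.2f} ms")

Pointing CACHE_DIR at a tmpfs, the SSD, and the HDD in turn, and at each candidate file system, gives a first-order comparison of the K side of the equation.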


It could be that the OP already has sufficient hardware and software, and tuning alone will do the job. Even if not, doing the exercise will identify the bottlenecks and guide allocation of resources.


David

