eAccelerator and memcached are currently two of the more mainstream caching and acceleration tools available for PHP.
eAccelerator is developed specifically for PHP, while memcached is not limited to PHP and can be used from many other languages as well.
The main functions of eAccelerator:
1. Caches the compiled execution code of PHP files: when cached code is called again, it is read directly from memory, which greatly improves PHP's execution speed.
2. Provides shared memory operation functions: users can save their own commonly used non-resource data into memory and read it back at any time (see the sketch after this list).
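For example, here is a minimal sketch of eAccelerator's shared memory functions, assuming the extension is installed with its user cache enabled; the key name, data, and the load_config_from_db() helper are illustrative, not part of the original text:

<?php
// Store a non-resource value (array, string, number) in shared memory.
// The third argument is a lifetime in seconds; 0 means it never expires.
eaccelerator_put('site_config', array('skin' => 'dark', 'per_page' => 20), 3600);

// Later, even in another request, read it back from shared memory.
$config = eaccelerator_get('site_config');
if ($config === null) {
    // Cache miss: rebuild the value and cache it again.
    $config = load_config_from_db();   // hypothetical helper
    eaccelerator_put('site_config', $config, 3600);
}

// Remove the entry when it is no longer needed.
eaccelerator_rm('site_config');
?>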
The main functions of memcached:
Provides shared memory operation functions to save and read data
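A minimal sketch using the PHP Memcache client extension (the host, port, and key name here are assumptions for illustration):

<?php
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);      // connect to the memcached server

// Store a value: the third argument is a flag (0 or MEMCACHE_COMPRESSED),
// the fourth is the expiration time in seconds (0 = never expires).
$mc->set('user:42:name', 'alice', 0, 600);

// Read it back; get() returns false on a cache miss.
$name = $mc->get('user:42:name');

$mc->close();
?>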
What the two have in common: both provide shared memory operation functions that can be used to save and read your own data.
The difference between the two:
eAccelerator exists as a PHP extension, so it can only read and write shared memory while PHP is running, and in general that shared memory can only be accessed by the PHP program itself.
At the same time, eAccelerator can cache the execution code of PHP programs to improve the loading and execution speed of the program.
Memcached is mainly used as a shared memory server, and its PHP extension is only a client library that connects PHP to memcached, much like the MySQL extension. Therefore, memcached can run completely separately from PHP, and its shared data can be accessed by different programs.
Given these differences, each should be used where it is really needed:
eAccelerator is mainly used to speed up a single PHP server and to cache intermediate data. It is very practical when real-time requirements are high but the volume of data operations is small.
Memcached is used in distributed or clustered systems where multiple servers need to share data. It is very practical when real-time requirements are high and the volume of data operations is large.
Correct understanding of MemCached
At first, I heard that MemCached was used to cache data in memory and then operate on that data (where "operate" includes both querying and updating), which sounded great: for a certain period of time there would be no need to touch the database at all.
Then I kept thinking about one question: querying is certainly possible, but how would concurrency be handled when updating data in memory? Could MemCached really have such a capability? If so, that would be amazing.
But things are not as I imagined; this understanding of MemCached is incorrect.
MemCached is just like other caches: once the underlying data is updated, what is in the cache becomes stale.
Reading explanations of MemCached written by more experienced developers online further confirmed this point.
So you should not expect to update MemCached directly and skip the database entirely.
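In other words, the database remains the source of truth. A common pattern, sketched here with the PHP Memcache client and PDO (the table, key names, and function names are assumptions for illustration), is to write to the database first and then invalidate the cached copy so the next read repopulates it:

<?php
function update_user_email(PDO $db, Memcache $mc, $user_id, $new_email) {
    // 1. Update the authoritative copy in the database.
    $stmt = $db->prepare('UPDATE users SET email = ? WHERE id = ?');
    $stmt->execute(array($new_email, $user_id));

    // 2. Invalidate the cached copy so the next read fetches fresh data.
    $mc->delete('user:' . $user_id);
}

function get_user(PDO $db, Memcache $mc, $user_id) {
    $user = $mc->get('user:' . $user_id);
    if ($user === false) {                               // cache miss
        $stmt = $db->prepare('SELECT * FROM users WHERE id = ?');
        $stmt->execute(array($user_id));
        $user = $stmt->fetch(PDO::FETCH_ASSOC);
        $mc->set('user:' . $user_id, $user, 0, 300);     // cache for 5 minutes
    }
    return $user;
}
?>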
I had thought that the set method it provides was used to update the database; that was just wishful thinking on my part.
In fact, this method caches database records into MemCached and lets you specify how long they remain valid.
Now I understand why the content in our MemCached did not change even after I deleted the underlying record.
When we called set(), we did not specify an expiration time, so it defaulted to 0, which means the item never expires; as long as the MemCached server is not restarted, it will always be there.
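The difference is easy to see in a short sketch (the key names and values here are illustrative):

<?php
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);

// Expires 60 seconds after being set; after that, get() returns false.
$mc->set('recent_posts', array('post 1', 'post 2'), 0, 60);

// Expiration omitted (defaults to 0): the item never expires on its own and
// stays cached until it is deleted, evicted, or the server restarts.
$mc->set('site_settings', array('theme' => 'blue'));
?>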
So in our ROR project we use caching to reduce database reads, but we cannot expect MemCached to save us from having to update the database.
If you really never needed to update the database, we would truly have entered the no-database era, haha. That is probably unlikely, unless we could guarantee that users arrive strictly in a queue, one after another.
Let’s find another way to reduce the pressure caused by updates.