The front end is two nginx servers, N1 and N2, using keepalived for high availability.
The back end is a cluster of four Tomcat servers, T1, T2, T3, and T4 (memcached is used to solve the session-sharing problem).
The code includes some static files such as JS/CSS, which are easy to handle: the source can be synchronized across all six servers.
The system takes uploads of a large number of files such as PDF/DOC that need to be converted to SWF for later preview, so T3 and T4 are dedicated to that processing, leaving the other general business to T1 and T2.
Here is the problem. When a client requests a PDF or SWF file, the request reaches nginx, but it cannot be served there: the nginx servers only hold static files such as JS/CSS locally, not DOC/PDF. So a location with proxy_pass has to forward the request to T3 and T4 — but T3 and T4 run only Tomcat, which is certainly not efficient at serving static files. What to do?
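For reference, a minimal sketch of the setup described above. The upstream name, internal host names, ports, and paths are all assumptions, not taken from the original post:

```nginx
# Assumed upstream for the conversion/preview Tomcats (T3, T4)
upstream preview_backend {
    server t3.internal:8080;
    server t4.internal:8080;
}

server {
    listen 80;
    server_name www.example.com;    # hypothetical main domain

    # js/css are present locally on N1/N2 and served directly
    location ~* \.(js|css)$ {
        root /var/www/static;
    }

    # pdf/swf only exist on T3/T4, so they must be proxied there
    location ~* \.(pdf|swf)$ {
        proxy_pass http://preview_backend;
        proxy_set_header Host $host;
    }
}
```

This is exactly the situation the question describes: the proxy works, but every PDF/SWF byte is then pushed through Tomcat.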
I have two ideas:
How did you solve it?
Is this question going to sink without an answer? This should be a common scenario, and there ought to be ready-made solutions to refer to.
The simplest way would be to assign a new domain or subdomain to the doc/pdf resources, configure nginx for it, and forward all requests for that domain to T3 and T4.
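A sketch of that suggestion, assuming a hypothetical subdomain `docs.example.com` and internal host names:

```nginx
# All document/preview traffic arrives on its own subdomain
upstream doc_backend {
    server t3.internal:8080;
    server t4.internal:8080;
}

server {
    listen 80;
    server_name docs.example.com;   # assumed subdomain for doc/pdf

    location / {
        proxy_pass http://doc_backend;
        proxy_set_header Host $host;
    }
}
```

The application would then generate links to uploaded files under the new subdomain, so nginx can route them by host name instead of by file extension.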
But that leaves a question: what if the user uploads a file to T3, and the next time the file is accessed nginx forwards the request to T4?
So, is there a synchronization mechanism between T3 and T4?
There are several ways to solve this:
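One way to sidestep synchronization entirely is to make the routing deterministic: hash on the request URI so a given file always goes to the same backend. This uses nginx's `hash` directive (ngx_http_upstream_hash_module); host names are assumptions, and it only works if uploads are routed through the same rule:

```nginx
# Each URI consistently maps to one backend, so a file is always
# fetched from the machine that stored it. "consistent" (ketama)
# limits remapping if a server is added or removed.
upstream doc_backend {
    hash $request_uri consistent;
    server t3.internal:8080;
    server t4.internal:8080;
}
```

The trade-off: if one of the two machines goes down, the files stored only there become unavailable, so this does not replace a backup.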
1. Use a shared file system, as you said, such as Samba, and have both T3 and T4 write uploaded files there.
2. Synchronize the files between the two machines. The advantage is that the data effectively gets a backup; the disadvantages I won't go into.
3. Try a CDN such as Youpai; that could even solve your original problem.
4. . . .
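For option 2, a minimal sketch of periodic one-way synchronization with rsync, run from a cron job. The upload directory and the host name are assumptions; in practice you would want this in both directions (or an inotify-based trigger) so files uploaded to either machine appear on the other:

```shell
# Crontab entry on T3: every 5 minutes, mirror new uploads to T4.
# /data/uploads and t4.internal are hypothetical; adjust to your layout.
*/5 * * * * rsync -az /data/uploads/ t4.internal:/data/uploads/
```

The obvious gap is the window between upload and the next sync, during which a request routed to the other machine would 404.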