Article by: Mirko Novakovic
As consultants we have been doing performance analysis and optimization of enterprise applications for more than a decade. In this time we have learned that there are some common "best practices" everybody should follow – most of the problems we have seen during our troubleshooting assignments could have been avoided by them. As the following 10 rules only reflect our experience at a high level, we would be happy to get feedback to extend the list.
1. Define the requirements
It sounds obvious that a team should know what the requirements for performance, scalability and availability are, but in more than 90% of our engagements they didn't. Sometimes the projects had no non-functional requirements at all, or only something like "the application should be fast". How do you measure "fast"? Are 8 seconds fast? Is 1 second?
Performance requirements should be defined SMART (specific, measurable, achievable, relevant, time-bound) – like "95% of the login requests should respond in less than 2 seconds, measured on the web server".
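A requirement phrased this way can be checked directly in code. A minimal sketch using the nearest-rank percentile method – the sample response times and the class/method names are illustrative assumptions, not part of any real test suite:

```java
import java.util.Arrays;

// Checks a SMART requirement: 95% of login requests under 2000 ms.
public class RequirementCheck {

    // Returns the p-th percentile (0..100) of the given response times,
    // using the simple nearest-rank method on a sorted copy.
    static long percentile(long[] millis, double p) {
        long[] sorted = millis.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(index, 0)];
    }

    public static void main(String[] args) {
        long[] loginTimes = {120, 340, 560, 780, 900, 1100, 1400, 1600, 1800, 1950};
        long p95 = percentile(loginTimes, 95.0);
        // Requirement: 95% of login requests respond in less than 2 seconds.
        System.out.println("95th percentile: " + p95 + " ms, requirement met: " + (p95 < 2000));
    }
}
```

With a check like this in your test suite, "fast" stops being a matter of opinion.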
2. Measure, don't guess
This tip is easy: you need a tool to measure and optimize your code. For Java this can be the bundled tools like jstat or jvisualvm. We prefer a good profiler like JProfiler, and an APM solution like AppDynamics for production analysis.
Without a tool you are tuning a black box!
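Even without a profiler at hand, the principle can be demonstrated with a naive timing harness. This is only a sketch of the idea – a real profiler gives far better data, and serious micro-benchmarks need a harness like JMH; the warm-up loop here is a crude nod to JIT compilation:

```java
// Naive timing sketch: measure, don't guess. A real profiler
// (JProfiler, jvisualvm) is far more accurate than this.
public class MeasureIt {

    // Averages the runtime of a task in milliseconds, after a few
    // warm-up runs so the JIT compiler has a chance to kick in.
    static double timeMillis(Runnable task, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) task.run();
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) task.run();
        return (System.nanoTime() - start) / 1_000_000.0 / runs;
    }

    public static void main(String[] args) {
        double avg = timeMillis(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        }, 5, 20);
        System.out.println("avg: " + avg + " ms per run");
    }
}
```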
3. One thing at a time
Nervous managers often think that having lots of developers optimize code will lead to better performance. Wrong! Changing a lot of things in parallel can have strange results: developer A may have made a good optimization, but at the same time developer B deploys code that makes things worse – so nobody will see developer A's optimization.
So, measure your application. Define the possible optimizations and prioritize them by balancing the expected gain against the effort and risk of the change. Implement one change at a time from the top of your list and measure the possible performance gain. Undo the change when no positive effect on performance can be measured. See rule 5 for a tip on when to stop.
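The prioritization step can be made explicit as a small ranked backlog. A sketch – the candidate names, the gain/effort numbers and the simple gain-per-effort score are made-up illustrations of the idea, not a prescribed method:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Ranks optimization candidates by expected gain relative to effort/risk.
public class OptimizationBacklog {

    record Candidate(String name, double expectedGainMs, double effortRisk) {
        double score() { return expectedGainMs / effortRisk; }
    }

    // Highest score first: big gains for little effort go to the top.
    static List<Candidate> prioritize(List<Candidate> candidates) {
        List<Candidate> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator.comparingDouble(Candidate::score).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        List<Candidate> ranked = prioritize(List.of(
                new Candidate("add DB index", 400, 1),
                new Candidate("rewrite ORM layer", 900, 8),
                new Candidate("cache config lookup", 150, 1)));
        // Work from the top, one change at a time; measure after each.
        ranked.forEach(c -> System.out.println(c.name() + " score=" + c.score()));
    }
}
```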
4. Automate
In most projects the team is under pressure to meet time and budget. This means that in most cases developers have no time for performance testing and analysis – especially since every change can cause a performance regression. Therefore we recommend automating the process of analyzing performance.
What you need is a Continuous Delivery pipeline that includes profiling and performance analysis of acceptance and load tests, plus a performance report for each build/release – including a comparison of the performance metrics of two builds. We include AppDynamics in our CD pipeline as it needs no configuration (e.g. instrumentation) when changes to the application are deployed.
This will not help you avoid every performance issue, but it will help you find the "low-hanging fruit" easily and early in the development process, when problems are easier to fix than later in production.
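The build-to-build comparison can be reduced to a simple gate: fail the build when any metric regresses beyond a tolerance. A sketch of that idea – the metric names and the 10% threshold are assumptions, and a real pipeline would read the numbers from its test reports:

```java
import java.util.Map;

// Fails a build when a metric of the new build is more than a given
// tolerance worse (higher) than the previous build's baseline.
public class PerformanceGate {

    static boolean passes(Map<String, Double> baseline,
                          Map<String, Double> current,
                          double tolerance) {
        for (var entry : baseline.entrySet()) {
            double before = entry.getValue();
            // A missing metric counts as a failure, too.
            double now = current.getOrDefault(entry.getKey(), Double.MAX_VALUE);
            if (now > before * (1.0 + tolerance)) {
                System.out.println("Regression in " + entry.getKey()
                        + ": " + before + " -> " + now);
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Double> previousBuild = Map.of("login.p95.ms", 1800.0, "search.p95.ms", 900.0);
        Map<String, Double> thisBuild = Map.of("login.p95.ms", 1750.0, "search.p95.ms", 1200.0);
        // Allow up to 10% regression per metric before failing the build.
        System.out.println("gate passes: " + passes(previousBuild, thisBuild, 0.10));
    }
}
```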
5. Only optimize if needed
When is tuning needed? The answer is: when the requirements are not met (see the first tip). Which also means: stop tuning once the requirements are met. As a rule, the further you optimize an application, the harder it gets and the more code you have to change. "Better performance" is therefore often in conflict with "better maintainability". So stop tuning when the requirements are met – even if you have the ambition to optimize the application to its limit.
6. Learn to parallelize
Amdahl's law is more important than ever. With boards carrying more CPUs, each with more and more cores (sometimes even at lower clock speeds), we have to develop applications that can utilize these architectures. This means developers have to learn about threads, concurrency, locking etc. to get the most performance and throughput out of a system. Sometimes it can also be interesting to look at specialized programming languages like Erlang or Scala for better built-in concurrency support.
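Amdahl's law states that the achievable speedup is S(n) = 1 / ((1 − p) + p/n), where p is the fraction of the program that can run in parallel and n the number of cores – so the serial part quickly becomes the bottleneck. A small sketch, with a parallel stream standing in for "the parallel part" (the numbers for p are illustrative):

```java
import java.util.stream.LongStream;

// Amdahl's law: the serial fraction (1 - p) limits the speedup,
// no matter how many cores you add.
public class AmdahlDemo {

    static double speedup(double parallelFraction, int cores) {
        return 1.0 / ((1.0 - parallelFraction) + parallelFraction / cores);
    }

    public static void main(String[] args) {
        // Even with 90% parallel code, 8 cores give less than 5x speedup.
        System.out.printf("p=0.9, n=8  -> %.2fx%n", speedup(0.9, 8));
        System.out.printf("p=0.9, n=64 -> %.2fx%n", speedup(0.9, 64));

        // The parallel part itself: a parallel stream spreads the work
        // across the common fork-join pool.
        long sum = LongStream.rangeClosed(1, 10_000_000).parallel().sum();
        System.out.println("sum = " + sum);
    }
}
```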
7. Learn to scale
There has been an evolution in scaling systems over the past years.
Vertical scalability (or scale up) is maybe the easiest way to scale a system – you just put more hardware into one box. But this approach is limited, as normally you cannot put unlimited CPUs and RAM into one box. In many cases (like mainframes) this is also a very expensive approach.
Horizontal scalability (or scale out) means running the application on more than one box. This approach is more scalable and cheaper, but also more complex from an application point of view: you have to deal with clustering, replication of data and new approaches to storing and scaling data (like MongoDB, neo4j or Riak). Elastic approaches using virtualization or cloud infrastructure can be even more complicated, because the scale-out process is automated and bidirectional – which means that hardware is not only added on demand but also removed when it is not needed.
Map/Reduce frameworks like Hadoop are kind of a mixture of parallelism and scalability for an application, because you split your application logic into pieces that can be run in parallel on many nodes. This can also help to improve performance and scalability of some problem domains.
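The split-and-combine idea behind Map/Reduce can be shown in miniature with Java streams; a real Hadoop job distributes the same map and reduce steps across many nodes instead of the cores of one JVM. A local sketch of the classic word-count example:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// Word count, the classic Map/Reduce example, run locally on one JVM:
// the "map" step emits words, the "reduce" step groups and sums them.
public class WordCount {

    static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .parallel()                       // map phase runs in parallel
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(   // reduce phase: group and count
                        w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(count("to be or not to be"));
    }
}
```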
So if you need a highly scalable application you have to get familiar with the technologies and impacts of the different scalability approaches and technologies.
8. Cache it
Memory is cheap these days, while retrieving data from disk or over the network (in distributed systems) is still expensive. So caching is one of the things you have to keep in mind when you want performance. You can think of caching architecture as an "onion skin model": the further outside the cache sits in the application, the more performance gain you can expect – but also the more limitations you get.
Let's take a web portal as an example, where pages are generated by a web application that retrieves data from a database. When you cache the whole page in a proxy or Content Delivery Network you get maximum speed, but the limitation is that the content becomes "static". Maybe this is ok, or a refresh every 15 minutes (e.g. for a news portal) fixes the issue; but if the content is more dynamic, you have to move the cache further inside the application. This could mean caching the session state in a distributed cache like Memcached, putting the data in an in-memory database like Redis – or just optimizing the database caches for better SQL performance. This results in fewer limitations (pages are still generated dynamically) but also less speed.
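At the innermost layer of the onion, an in-process cache is often the simplest starting point. A minimal LRU cache sketch built on `LinkedHashMap`'s access-order mode – the tiny capacity is only for illustration, and a production system would rather use a proper cache library:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A tiny in-process LRU cache: a LinkedHashMap in access order evicts
// the least-recently-used entry once the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() reorders entries
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");          // touch "a" so "b" becomes the eldest entry
        cache.put("c", "3");     // exceeds capacity, evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```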
9. Right Abstraction
This is also a simple rule: Don’t be too abstract in your application design.
E.g. don't create a layer above your database that abstracts away all the special features of Oracle/DB2/MSSQL/… – this will make your application slow, as you are not using the features you have paid for. Yes, we know you did this to reduce the effort of replacing the database if needed… but believe us, this will not happen in the next 10 years, and if it does, you will have other problems…
10. Call an expert
Tuning or scaling an application in a BIG environment can be hard and needs expert skills. So don't try to do it on your own if you do not have the experience yet, as this can get expensive. Try to get an expert who has experience with the type of application you want to build. An expert does not always mean an external consultant – it can also be the colleague next door, or the DB admin from the operations team in the basement you do not want to talk to 🙂 (see DevOps)