Depends on the requirements. Writing the code in a natural and readable way should be number one.
Then you benchmark, find out what actually takes time, and optimize from there.
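Something like this, to make it concrete (a toy Python sketch; the function names are made up):

    import cProfile
    import pstats

    def parse_records(n):
        # Stand-in for the code you suspect is slow.
        return [str(i).rjust(8) for i in range(n)]

    def tally(records):
        # Stand-in for a cheap step.
        return len(records)

    def workload():
        for _ in range(50):
            tally(parse_records(20_000))

    # Profile the whole workload, then inspect the top offenders.
    cProfile.run("workload()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)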
At least that's my approach when working with mostly functional languages. No need to obsess over the performance of something that's run only a dozen times per second.
I do hate over-engineered abstractions though. But not for performance reasons.
You need to be careful about relying on benchmarking to find performance problems after the fact. You can get stuck in a local maximum where there is no particular cost center but it's all just slow.
If performance specifically is a goal, there should probably at least be a theory of how it will be achieved, which can then be refined with benchmarks and profiling.
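For example: if the theory is "one comprehension in the hot loop beats repeated appends", write it down and then check it rather than assume it (toy Python sketch):

    import timeit

    def with_append(n):
        out = []
        for i in range(n):
            out.append(i * 2)
        return out

    def with_comprehension(n):
        return [i * 2 for i in range(n)]

    # Measure both variants against the theory.
    for fn in (with_append, with_comprehension):
        t = timeit.timeit(lambda: fn(100_000), number=100)
        print(f"{fn.__name__}: {t:.3f}s")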
Writing the code in a natural and readable way should be number one.
I mean, even there it depends what you’re doing. A small matrix multiplication library should be fast even if it makes the code uglier. For most coders you’re right, though.
You can add tons of explanatory comments with zero performance cost.
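For example, even a tuned loop order can carry its explanation along at no runtime cost (toy Python sketch):

    def matmul4(a, b):
        # Multiply two 4x4 matrices given as nested lists.
        # Every comment here is free: it vanishes at byte-compile
        # time and never touches the hot loop.
        n = 4
        # Preallocate so the inner loop only writes, never resizes.
        out = [[0.0] * n for _ in range(n)]
        for i in range(n):          # row of a
            for k in range(n):      # shared dimension; k before j lets us
                aik = a[i][k]       # hoist a[i][k] into a local just once
                for j in range(n):  # column of b
                    out[i][j] += aik * b[k][j]
        return out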
Also, in programming in general (so, outside stuff like being a quant), the fraction of code where high performance is the top priority is minuscule (and I say this having actually designed high-performance software systems for a living). As @ForegoneConclusion explained earlier, you don't optimize upfront; you optimize when you figure out it's actually needed.
Thinking about it, if you're designing your own small matrix multiplication library (i.e. reinventing the wheel), you're probably failing at the software design level: as long as the licensing is compatible, it's usually better to use something that already exists, is performance-oriented and has been in use for decades than to make your own (almost certainly inferior, with fresh new bugs) thing.
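Case in point, in Python (assuming NumPy is an option in your stack):

    import numpy as np

    a = np.random.rand(4, 4)
    b = np.random.rand(4, 4)

    # One line buys decades of BLAS-level tuning; "@" dispatches
    # to np.matmul.
    c = a @ b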
PS: Not a personal criticism - I too still have to remind myself at times not to reinvent what's already there. It's only natural for programmers to trust their own skills above whatever random people wrote some library, and to want to program rather than spend time evaluating what's out there.
Thinking about it, if you’re designing your own small matrix multiplication library (i.e. reinventing the wheel)
I thought of this example because a fundamental improvement was actually made with the help of AI recently. 4x4 specifically was improved noticeably, IIRC, and if you know a bit about matrix multiplication, that ripples out to large-matrix algorithms.
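The ripple-out happens because big multiplies are built from small ones by block recursion, so the base case's multiplication count sets the asymptotic exponent. A toy Python sketch of the classic eight-product recursion (power-of-two sizes assumed):

    def split(m):
        # Split a 2n x 2n matrix into four n x n quadrants.
        n = len(m) // 2
        return ([row[:n] for row in m[:n]], [row[n:] for row in m[:n]],
                [row[:n] for row in m[n:]], [row[n:] for row in m[n:]])

    def add(x, y):
        return [[p + q for p, q in zip(rx, ry)] for rx, ry in zip(x, y)]

    def matmul(a, b):
        if len(a) == 1:
            return [[a[0][0] * b[0][0]]]
        a11, a12, a21, a22 = split(a)
        b11, b12, b21, b22 = split(b)
        # Eight recursive half-size products -> O(n^3). Strassen gets
        # by with seven -> O(n^log2(7)), about O(n^2.81); a cheaper
        # small case improves the exponent the same way.
        c11 = add(matmul(a11, b11), matmul(a12, b21))
        c12 = add(matmul(a11, b12), matmul(a12, b22))
        c21 = add(matmul(a21, b11), matmul(a22, b21))
        c22 = add(matmul(a21, b12), matmul(a22, b22))
        top = [r1 + r2 for r1, r2 in zip(c11, c12)]
        bot = [r1 + r2 for r1, r2 in zip(c21, c22)]
        return top + bot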
PS: Not a personal criticism
I would not actually try this unless I had a reason to think I could do better, but I come from a maths background and do have a tendency to worry about efficiency unnecessarily.
I think in most cases (matrix multiplication being probably the biggest exception) there is a way to write an algorithm that’s easy to read, especially with comments where needed, and still approaches the problem the best way. Whether it’s worth the time trying to build that is another question.
In my experience we all go through a stage at the Designer-Developer level where, having discovered things like Design Patterns, we overengineer the design of the software to the point of making it near unmaintainable (for others, or for ourselves 6 months down the line).
The next stage is to discover the joys of KISS and, like you described, of refraining from premature optimization.
A small matrix multiplication library should be fast even if it makes the code uglier.
Even then you can make some effort to keep it easy for humans to parse.
Oh, absolutely. It’s just the second most important thing.