A young computer scientist and two colleagues show that searches within data structures called hash tables can be much faster than previously deemed possible.
Very sus. This article is not technically convincing at all. Probably a few students who dream of inventing the next big thing. Hash table complexity was mathematically proven to be generally optimal. If you think you've optimized it, it's probably because your tests don't reflect common real-world workloads.
The paper was published by IEEE, with professors as co-authors; only the second author is a student. And I wouldn't dismiss it out of hand just because of a magazine article. Students come up with breakthroughs all the time.
The paper itself says it disproves Yao's conjecture. I personally plan to implement and benchmark this, because the results seem so good. It could be another Fibonacci heap situation, but maybe not. Hash tables are so widely used that it might even be worthwhile to build special hardware for this on servers, if our current computer architecture is the only thing holding back the performance.
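For anyone else planning the same benchmark, here's a minimal harness of the kind I'd start from: time successful lookups in Python's built-in dict as a baseline, to compare later against an implementation of the paper's scheme. All names here are my own, nothing is taken from the paper.

```python
# Baseline lookup benchmark: time random successful lookups in a
# built-in dict. A competing hash table implementation would be
# dropped in where `build_table` returns a dict.
import random
import timeit

def build_table(n: int) -> dict[int, int]:
    """Build a dict with n distinct random integer keys."""
    keys = random.sample(range(10 * n), n)
    return {k: k for k in keys}

def bench_lookups(table: dict[int, int], trials: int = 100_000) -> float:
    """Return wall-clock seconds to perform `trials` successful lookups."""
    keys = list(table)
    probes = [random.choice(keys) for _ in range(trials)]
    return timeit.timeit(lambda: [table[k] for k in probes], number=1)

if __name__ == "__main__":
    table = build_table(100_000)
    elapsed = bench_lookups(table)
    print(f"{elapsed:.4f}s for 100k lookups")
```

Caveat: a pure-Python reimplementation will always lose to the C-level dict, so this only tells you about relative probe counts, not whether the scheme beats dict in absolute terms.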
I have skimmed the paper "Tiny Pointers". It seems they use several levels of tables ("load balancing" and "overflow") to store the hashes, and they claim they can save space and time by splitting up the hash into a part that can be computed and a unique part.
It doesn't look really groundbreaking; more like a tool that has its applications in some places.