Geek Efficiency Curve Updated

AI adds another boundary to the famous geek efficiency curve



Era of Internet Technology

Below is the famous geek efficiency curve from the Internet age (source: the Internet). The moral is simple: geeks (commonly, programmers) spend time building an automation solution so that the effort pays off once the task size is big enough.


Here are the fun facts about this:

  • Non-geeks make fun of the geek's complicated method, as the annotation in the image above puts it. I believe everyone reads the annotation.
  • Geeks make fun of non-geeks using this chart -- the often overlooked part is that people who quote this chart assume the task size is big enough to reach the break-even point (the red/blue line intersection).

In my experience (as a programmer, product owner, and business owner), we usually don't have a task size big enough to justify the initial investment in building a fully automated solution. A few times the eventual task size could have been big enough, but the business failed on its current track and the team had to pivot before the task size reached the break-even point.
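The break-even reasoning above can be made concrete with a tiny sketch. All the numbers below are illustrative assumptions (minutes per task, setup cost), not figures from the chart:

```python
# Toy model of the geek efficiency curve (illustrative numbers only).
# Manual effort grows linearly with task size; automation pays a fixed
# upfront cost and then a much smaller per-task cost.

def manual_cost(n_tasks, minutes_per_task=5.0):
    """Total minutes doing every task by hand."""
    return n_tasks * minutes_per_task

def automated_cost(n_tasks, setup_minutes=600.0, minutes_per_task=0.1):
    """Total minutes after investing upfront in a script."""
    return setup_minutes + n_tasks * minutes_per_task

def break_even(minutes_per_task=5.0, setup_minutes=600.0, automated_per_task=0.1):
    """Task size at which the two lines intersect."""
    return setup_minutes / (minutes_per_task - automated_per_task)

print(break_even())  # ~122 tasks before automation pays off, under these assumptions
```

If the business pivots before roughly that many tasks are done, the automation investment never pays for itself, which is exactly the trap described above.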

I believe many people, whether closer to the business (non-geeks) or closer to the technology (geeks), have been involved in discussions like feature prioritization. What is a must-have? What is a nice-to-have? Those discussions boil down to different perceptions of the geek efficiency curve above. Depending on where they sit in the team, people may have different predictions of the break-even point, which is unknown until we build and test the product.

Era of GenAI/ LLM

It turns out that conversation is so powerful that it naturally embeds the capability for many human tasks. Once machines can emulate human talking, they become very powerful assistants.

In the earlier post ChatGPT for Data Wrangling Works, we showed that AI can be handy for solving some easy problems. In the past, we needed to speak the computer's language, e.g. Python, to instruct the machine to solve such a problem. Now the machine understands our natural language and prepares the data for us directly.

There are two key factors of cost disruption compared to the previous regime:

  • We do not see a sharp bump in time spent here. Writing a prompt is of course slower than talking to a secretary or a junior, but not much slower.
  • The marginal cost per task is not negligible, because LLMs are costly these days and often come with rate limits that are orders of magnitude slower than tailor-made scripts.

Fun fact: The cost of AI may not necessarily be lower than the cost of human...

In the post Read Business Card with ChatGPT, I quickly interfaced with ChatGPT and assigned it the task of parsing 100 name cards. It is a nice POC, but may not be cost-effective to scale. Let's do a quick back-of-the-envelope calculation (all numbers assume the maximum). Suppose we use GPT-4, which charges $0.03/1K input tokens and $0.06/1K output tokens. Suppose we have 4K input tokens (as garbled text can be) and 1K output tokens (a nice JSON object); the total cost is $0.18 per card. However, one can find human-based name card input services that charge about $0.1/card. Of course, with some clever pre-processing, a more compact output format, or a cheaper model, we can push the cost of AI per card down to the $0.01 level and beat the human reader. However, that means more quality human time as the initial investment.
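The back-of-the-envelope calculation can be written out directly. The prices are the GPT-4 rates quoted above; the compacted token counts in the second call are hypothetical, just to show how the per-card cost could approach the $0.01 level:

```python
# Per-card cost under the GPT-4 prices quoted above
# ($0.03 per 1K input tokens, $0.06 per 1K output tokens).

def cost_per_card(input_tokens, output_tokens,
                  in_price_per_1k=0.03, out_price_per_1k=0.06):
    return (input_tokens / 1000 * in_price_per_1k
            + output_tokens / 1000 * out_price_per_1k)

# Worst case from the post: 4K garbled input tokens, 1K JSON output tokens.
print(cost_per_card(4000, 1000))  # 0.18 -- above the ~$0.10 human service

# Hypothetical compacted pipeline: 300 tokens in, 100 tokens out.
print(cost_per_card(300, 100))    # 0.015 -- approaching the $0.01 level
```

The gap between the two calls is exactly the "more quality human time as the initial investment": someone has to build the pre-processing and choose the compact format.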

The implication here is that many more alternative solutions to the same problem could come up, each exhibiting a different cost profile (<human time, AI time>). That's why we genuinely believe that humans and machines will form a supply chain of intelligence (On Future of AI), and the key is to establish a market that defines the exchange rate between the two kinds of time.

Assuming a unified cost, the geek efficiency curve is updated as below.

[Figure: Updated Geek Efficiency Curve in AI Era]

  • Traditional Internet giants bet on the economy of scale and take profit from IGH when the task size is large enough.
  • We may see many emerging SMEs leveraging geek+AI solutions. They profit from the efficiency gain over both the non-geek and the geek+computer approaches.
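The two bullets above describe three regimes on the updated curve. A minimal sketch, with all cost numbers invented for illustration (the real slopes and intercepts depend on the task), shows how each approach wins in a different task-size range:

```python
# Three regimes of the updated curve, in one unified cost unit.
# All coefficients are illustrative assumptions, not measured values.

def non_geek(n):        # no setup, full manual cost per task
    return 5.0 * n

def geek_computer(n):   # heavy setup (scripting), near-zero marginal cost
    return 600.0 + 0.1 * n

def geek_ai(n):         # light setup (prompting), non-negligible AI cost per task
    return 60.0 + 1.0 * n

for n in (10, 100, 1000):
    costs = {"non-geek": non_geek(n),
             "geek+AI": geek_ai(n),
             "geek+computer": geek_computer(n)}
    print(n, min(costs, key=costs.get))
# Small tasks favor non-geek, mid-size tasks favor geek+AI,
# and large tasks favor the fully automated geek+computer route.
```

Under these assumptions, the geek+AI band in the middle is exactly where the emerging SMEs would operate, while the giants live at the far right where the economy of scale dominates.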

What do you think the future will look like for geeks given the disruption brought by AI? Here is the Google Draw version; feel free to copy and modify it to make your comments.

Support HU, Pili by becoming a sponsor. Any amount is appreciated!