Has anyone formulated a "Moore's Law" of open source / cloud computing? I.e. something of the following form: take any app that takes > 1M lines of code today; within 18 months, open source / cloud computing will have advanced enough that you can build it in half the lines of code
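The hypothesized law could be sketched as a simple halving function (an illustrative formula only, not something claimed in the thread; the 18-month period and the function name are assumptions):

```python
def projected_loc(initial_loc, months, halving_period=18):
    """Hypothetical 'Moore's Law' for code: the lines needed to
    rebuild an app halve every `halving_period` months as
    open-source/cloud tooling advances."""
    return initial_loc * 0.5 ** (months / halving_period)

# Under this toy model, a 1M-line app would take ~500k lines to
# rebuild after 18 months, and ~250k lines after 36 months.
```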
But technically, once "any app" has been written, you can hide it behind a single function call, which is way less than half the code size
Or are you thinking about a genuine elimination of inefficiency in lines of code? Like some half life toward an optimal minimal expression?
I'm thinking about this in the context of startups. Say you take Uber, Airbnb, Pinterest, Snapchat, Dropbox, ... then we consider: 1. how many lines of code it would take using today's cloud computing + open-source libs 2. re-evaluate in 18 months
In particular: 1. aws lambda + dynamodb/aurora goes a really long way 2. as aws lambda gets more popular, there will probably be libraries that consist entirely of lambda util functions ==> it seems with these higher abstractions, the same can be achieved using less code
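A minimal sketch of the kind of Lambda handler being described (the event shape here is hypothetical; a real deployment would wire in IAM, API Gateway, and DynamoDB/Aurora calls via a client library rather than keeping logic in the handler):

```python
import json

def handler(event, context):
    """Hypothetical AWS Lambda entry point: the runtime invokes this
    function with a request event. The point of the thread's argument
    is that this is nearly all the application code you write; the
    server fleet, scaling, and storage live in the platform."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Locally you can exercise it as `handler({"name": "uber"}, None)`; in AWS, the same function is invoked by the runtime.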
I do think there is a real effect here: economies of scale in code infrastructure, where it takes proportionally less code to tackle larger problems as the problem size grows. Similar to the metabolic scaling laws of biological functions.
Such that there is a correspondence between the allometry of organic structures and the polynomial time/space cost of algorithms and data structures.
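The allometric analogy can be made concrete with a toy power-law model (the 3/4 exponent echoes Kleiber's law for metabolism vs. body mass; applying it to code cost is purely illustrative, not a measured relationship):

```python
def code_cost(problem_size, exponent=0.75):
    """Toy allometric model of the claim above: cost grows as
    size**exponent with exponent < 1, so doubling the problem
    less than doubles the code required."""
    return problem_size ** exponent

# Doubling the problem multiplies cost by 2**0.75 (about 1.68),
# not by 2 -- the sublinear scaling the analogy suggests.
ratio = code_cost(2_000) / code_cost(1_000)
```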
I have no idea of how many lines of code are involved in implementing aws lambda + dynamodb / aurora, but in fairness shouldn't those lines of code be counted in the new total? Or you consider them now to be "nearly free infrastructure" because it is shared amongst enough applications?
@andy.fingerhut: Valid point concerning lines used to implement Lambda / Dynamodb / AWS. I don't count those since I want to measure "If you were starting company X today, how many lines of code would you have to write?"
I'd even say LOC is a bad metric (as usual) 😛 are there even reliable numbers showing that "company X is successful" correlates with "company X has written a lot of code", and did they have to?
really bad example: Didn't Twitter start with Rails? So they didn't write a lot of code until they had scaling problems. Then they wrote a lot of code to scale, whereas maybe rewriting it from scratch would've taken fewer LOC, but carried far more risk. Also, by what metric is Twitter successful? Do they even make noticeable amounts of money?
Problem is, once these small services are built they become the new standard. New solutions have to innovate to take on the old established services.
So I would argue that in general innovative apps have remained at about the same level of person hours over the past two decades.
Specifically it’s the amount of work that can be accomplished by about 5 devs in about one year. More devs doesn’t speed things up, and more time just creates a higher chance of failure.
A year in my experience is about enough time to get a new idea off the ground and usable.
So all these innovations just increase the amount of complexity that can be tackled with these constraints.
30 years ago, QuickBooks was a DOS app written by some people in their garage. Today it's possible to write the same app in less time, but you'll spend more time on bank integrations, automatic billing, etc
those are some very good points - but if you're looking at "company scale", my last gig taught me that a second team of also 5 people can indeed speed things up, since you don't have to do everything yourself. Or make it 2x4 - this can be the old dev vs ops split, or just building 2 key features at the same time (or frontend/backend, whatever)
If our hypothetical 2nd implementation team is told simply to clone the first product, then you’ll get time savings by not having to seek product/market fit, not implementing code that would be deleted later because of customer feedback, not striving for “malleability” in the codebase to support rapid iteration, etc. All of which would be the case even if no other [edit: no newer or better] tools or infrastructure were available to the 2nd team.
hehe, nice idea. A/B testing features, all the time 🙂 I think you could optimize output via feature flags though
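The feature-flag idea mentioned here is commonly implemented as deterministic hash-based bucketing, so each user consistently lands in the same experiment arm (a generic sketch; the function and flag names are made up, not from the thread):

```python
import hashlib

def in_experiment(user_id, flag_name, rollout_pct):
    """Deterministically bucket a user for A/B testing: hash the
    (user, flag) pair into a bucket 0-99 and enable the flag when
    the bucket falls under the rollout percentage. The same user
    always gets the same answer for the same flag."""
    digest = hashlib.sha256(f"{user_id}:{flag_name}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct
```

Ramping a feature is then just raising `rollout_pct` from 0 to 100 without a redeploy, which is how flags let one codebase serve both experiment arms at once.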
Thinking about what @tbaldridge said, it seems to me the reason why innovative apps always seem to consume the same person-hours is because cost itself is the limiting factor. A startup raises a certain amount of money to get the company to some defined milestone. Assuming there is competition in the marketplace then they have to move at a competitive pace and the milestone needs to be differentiated enough to “win”. So the investment is made balancing risks versus potential upside, but that investment is informed by the accepted norms of team size and agility/velocity and so on, and therefore tends to converge to a similar check size.
@tbaldridge Believe me, QuickBooks as it is could only have come about as the result of 30 years of hacking away at a DOS app made by people in a garage