At a big-box retail chain (15 states, ~300 stores) I worked on a project to replace the POS system.
The original plan had us getting everything working (ha!), deploying it out to the stores, and only then finishing with the two oddball "stores". The company cafeteria and the surplus store were technically stores, in that they had all the same setup and processes, but they were odd.
When the team I was on was brought into this project, we flipped that around and deployed to those two first, several months ahead of the schedule for the regular stores.
In particular, the surplus store had a few dozen transactions a day; if anything broke, you could do reconciliation by hand. The cafeteria, meanwhile, pushed more transactions through a single register than the surplus store saw on most any day. Furthermore, all of its transactions were payroll deductions (swipe your badge rather than a credit card or cash), which meant that if anything went wrong there, we weren't in trouble with PCI and could simply debit and credit accounts.
Ultimately, we made our deadline to get things out to the stores. We did have one nasty bug that showed up in late October (or was it early November?) with repackaging counts: if a box of 6 was $24 and a single item was $4.50, then buying 6 single items got "repackaged" to cost $24 rather than $27. That interacted badly with a BOGO sale and produced absurd receipts (you'd see $10,000 in sales discounted by $9,976), and then the GMs got alerts that the store would not be able to make payroll because of a $9,976 discount. One of the devs pulled an all-nighter to fix that one, and the fix got pushed to the stores.
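For the curious, the failure mode is easy to reproduce in miniature. Here's a toy Python sketch; the prices match the story above, but the BOGO/receipt logic is my own guess at how such rules can feed each other, not the actual system's code:

    UNIT_PRICE = 4.50   # single-item price from the story
    BOX_SIZE = 6
    BOX_PRICE = 24.00   # 6 singles get "repackaged" to the box price

    def repackage_total(qty: int) -> float:
        """Charge full boxes at the box price, the remainder at unit price."""
        boxes, loose = divmod(qty, BOX_SIZE)
        return boxes * BOX_PRICE + loose * UNIT_PRICE

    def bogo_receipt(qty_bought: int) -> tuple[float, float]:
        """Hypothetical naive receipt: BOGO doubles the units in the basket,
        every unit is itemized at full unit price, and the gap between the
        itemized total and the repackaged charge prints as a 'discount'."""
        listed = qty_bought * 2 * UNIT_PRICE    # purchased + free units
        charged = repackage_total(qty_bought)   # only purchased units charged
        return listed, listed - charged

    sale, discount = bogo_receipt(1200)
    print(f"receipt: ${sale:,.2f} in sales, ${discount:,.2f} in discounts")

With a big enough basket, the "sales" and "discount" lines balloon the same way those $10,000/$9,976 receipts did, even though the amount actually charged stays sane.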
I shudder to think about what would have happened if we had tried to push the POS system out to customer-facing stores without the performance issues having been worked out in the cafeteria first, or if we had needed to reconcile transactions to hunt down incorrect tax calculations.
Which is to say, there is more than one approach to gradual deployment.
Things like the credit card reader (and the magnetic ink reader for checks), the different input devices (sending the barcode scanner's output to two different systems), and keyboard input (completely different screens and keyed entry) would have made those hardware problems yet more things that needed to be solved.
The old system was DOS-based, with a given set of F-keys used to switch between screens. Need to hand-enter a SKU? That was F4, then type the number. Need to search for the description of an item? That was F5. The keyboard was particular to that register setup and used an old-school XT (5-pin DIN) plug. The new systems were much more modern Linux boxes that used USB. The mag stripe readers were flashed with new screens (and the old ones were replaced).
In this situation, sending keyboard, scanner, and credit card events to another register just wasn't something we could do.
Scale is separately a Product and Engineering question. You are correct that you cannot scale a Product to delight many users without it first delighting a small group of users. But there are plenty of scaled Engineering systems that were designed from the beginning to reach massive scale. WhatsApp is probably the canonical example of something that was a rather simple Product with very highly scaled Engineering and it's how they were able to grow so much with such a small team.
Technical debt isn’t usually the problem people think it is. When it does become a problem, it’s best to think of it in product-like terms. Does it make the product less useful for its intended purpose? Does it make maintenance or repair inconvenient or costly? Or does it make it more difficult or even impossible to add competitive features or improvements? Taking a product evaluation approach to the question can help you figure out what the right response is. Sometimes it’s no response at all.
That's just a recipe for disaster: "We don't even know if we can handle 100 users, so let's force 1 million people to use the system simultaneously." Even WhatsApp couldn't handle hundreds of millions of users on the day it was first released, nor did it attempt to. You build out slowly and make sure things work, at least if you're competent and sane.
So while they probably didn't bother scaling the service to millions in the first version, they 1) knew what it would take, and 2) chose a good technology from the ground up for a smoother transition to your "X million users". The step from X million to XYZ million and then billions required other things too.
At least they didn't have to write a PHP-to-C++ compiler like Facebook did, given Mark Zuckerberg's initial design choice, which shows exactly what it means to begin something with the right tools and ideas in mind.
But this takes skills.
https://news.ycombinator.com/item?id=44911553
Started as PHP, not as Erlang.
> 1) knew what it would take, and 2) chose a good technology from the ground up for a smoother transition to your "X million users".
No, as above, that was a pivot. They did not start from the ground up with Erlang or ejabberd; they adopted those later.
I, for example, have always said that I am more than capable of writing code in C that is several orders of magnitude SLOWER than what I could write in, say, Python.
My skillset would never be used as an example of the value of C for anything.
WhatsApp is a terrible example because it's barely a product; WhatsApp is mostly a free offering of goodwill riding on the back of actual products like Facebook Ads. A great example would be a product like Salesforce, SAP, or Microsoft Dynamics. Those products are forced to grow and change and adapt and scale, to massive numbers doing tons of work, all while being actual products and real software systems. I think such products act as stark rebukes of what you've described.
That is vastly easier to achieve by making a small, successful system, one that gets buy-in from both users and builders, to the extent that the former pay sufficient money for the latter to be invested in understanding the entire system, growing it, and keeping up with the changes.
Occasionally a moonshot program can overcome all of that inertia, but the "90% of all projects fail" statistic is definitely overrepresented among large projects. And the Precautionary Principle says you shouldn't attempt them, because the consequences are so high.
This is what https://www.amazon.com/How-Big-Things-Get-Done/dp/0593239512 advocates too: start small, modularize, and then scale. The example of Tesla's Gigafactory was particularly enticing.
The UK gov development service reliably implements huge systems over and over again, and those systems go out to tens of millions of users from day 1. As a rule of thumb, the parts of the UK govt digital suite that suck are the parts the development service hasn't been assigned to yet.
The SWIFT banking org launches reliable features to hundreds of millions of users.
There’s honestly loads of instances of organisations reliably implementing robust and scalable software without starting with tens of users.
Moreover, their core tech did not evolve that far from that era, and the '70s tech bros are still there through their progeny.
Here's an anecdote: The first messaging system built by SWIFT was text-based, somewhat similar to ASN.1.
The next one used XML, as it was the fad of the day. Unfortunately, neither SWIFT nor the banks could handle a 2-3 order of magnitude increase in payload size in their ancient systems. Yes, as engineers, you would think compressing XML would solve the problem, and you would be right. Moreover, XML Infoset already existed, and it defined compression as a function of the XML Schema, so it was somewhat more deterministic, even if not more efficient than LZMA.
But the suits decided differently. At one of the SIBOS conferences they abbreviated the XML tags, and did it literally on paper, without thinking about back-and-forth translation, dupes, etc.
And this is how we landed with the ISO 20022 abbreviations that we all know and love: Ccy for Currency, Pmt for Payment, Dt for Date, etc.
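The compression point is easy to demonstrate with a toy sketch (the payloads are invented): a generic compressor collapses verbose, repetitive tag names on its own, so hand-abbreviating them buys almost nothing on the wire:

    import zlib

    # Repetitive message bodies, as bank payment traffic tends to be.
    verbose = b"<Currency>USD</Currency><PaymentDate>2024-01-01</PaymentDate>" * 1000
    abbreviated = b"<Ccy>USD</Ccy><PmtDt>2024-01-01</PmtDt>" * 1000

    print(len(verbose), "->", len(zlib.compress(verbose)))
    print(len(abbreviated), "->", len(zlib.compress(abbreviated)))

The raw payloads differ by tens of kilobytes; compressed, they land in the same small ballpark.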
When you bite off too much complexity at once you end up not shipping anything or building something brittle.
"Nobody should start to undertake a large project. You start with a small trivial project, and you should never expect it to get large. If you do, you'll just overdesign and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision. So start small, and think about the details. Don't think about some big picture and fancy design. If it doesn't solve some fairly immediate need, it's almost certainly over-designed. And don't expect people to jump in and help you. That's not how these things work. You need to get something half-way useful first, and then others will say "hey, that almost works for me", and they'll get involved in the project."
-- Linux Times, October 2004.
"All complex systems that work evolved from simpler systems that worked"
I see a lot of software with that initial small scale "baked into it" at every level of its design, from the database engine choice, schema, concurrency handling, internal architecture, and even the form design and layout.
The best-engineered software I've seen (and written) always started at the maximum scale, with at least a plan for handling future feature extensions.
As a random example, the CommVault backup software was developed at AT&T to deal with their enormous distributed scale, and it was the only decently scalable backup software I have ever used. With its competitors, it was a serious challenge just to run a report of last night's backup job status!
I also see a lot of "started small, grew too big" software make hundreds of silly little mistakes throughout, such as using drop-down controls for selecting users or groups. Works great for that mom & pop corner store customer with half a dozen accounts, fails miserably at orgs with half a million. Ripping that out and fixing it can be a decidedly non-trivial piece of work.
Similarly, cardinality in the database schema has really irritating exceptions that only turn up at the million- or billion-row scale and can be obscenely difficult to fix later. An example I'm familiar with: the ISBN codes used to "uniquely" identify books are almost, but not quite, unique. There are a handful of duplicates, and yes, they turn up in real libraries. This means that if you used these as a primary key somewhere... bzzt... start over from the beginning with something else!
There is no way to prepare for this if you start with indexing the book on your own bookshelf. Whatever you cook up will fail at scale and will need a rethink.
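To make the ISBN point concrete, here's a minimal Python sketch of the failure mode (the duplicated ISBN is made up for illustration):

    books = {}  # keyed by ISBN, standing in for a primary-key column

    def add_book(isbn: str, title: str) -> None:
        if isbn in books:
            # With ISBN as the primary key, the second title cannot be stored.
            raise ValueError(f"duplicate key {isbn}: {books[isbn]!r} vs {title!r}")
        books[isbn] = title

    add_book("0-000-00000-0", "First Book")            # invented ISBN
    try:
        add_book("0-000-00000-0", "A Different Book")  # real catalogs do hit this
    except ValueError as err:
        print(err)

The usual escape hatch is a surrogate key (an auto-generated id) with the ISBN as an indexed, non-unique column: trivial to do on day one, brutal to retrofit at the billion-row scale.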
There's a crazy amount of complexity and customizability in systems like ERPs for multinational corporations (SAP, Oracle).
When you start with a small town, you'll have to throw almost everything away when moving to a different scale.
That's true for software systems in general. If major requirements are bolted on after the fact, instead of designed into the system from the beginning, you usually end up with an unmaintainable mess.
After every single project, the org comes together to do a retrospective and asks, "What can devs do differently next time to keep this from happening again?" The people leading the project take no action items, management doesn't hold itself accountable at all, nor does product for late-changing requirements. And so the cycle repeats next time.
I led an effort one time, after a big bug made it to production following one of those crunches, that painted the picture of the root cause: a huge, complicated project handed off to offshore junior devs with no supervision, and the junior devs managing it being completely switched out twice in the 8-month project with no handover, nor any introspection by leadership. My manager's manager killed the document and wouldn't allow publication until I removed any action items that would constrain management.
And thus, the cycle continues to repeat, balanced on the backs of developers.
I think the reasons this hasn't happened are (a) tech has moved too fast for anyone to actually be able to credibly say how things should be done for longer than a year or two, and (b) attempts at professional organizations borrowed too much from slower-moving physical engineering and so didn't adapt to (a). But I do think it can be done and would benefit the industry greatly (at the cost of slowing things down in the short term). It requires a very 'agile' sense of standards, though. If standards mean imposing big constraints on development, nobody will pay attention to them.
>a sophisticated attempt at building a professional organization that can spread simple standards which organizations can clearly measure themselves against.
We have that in the form of the IEEE, but it really doesn't come up much if you're not already neck-deep in the organization.
That's maybe true in Europe. Plenty of US developers these days have a litany of ~1-2 year stints at FAANGs and startups du jour on their CV.
> If standards mean imposing big constraints on development, nobody will pay attention to them.
Unless there are penalties for not doing so.
> tech has moved too fast for anyone to actually be able to credibly say how things should be done for longer than a year or two
But that's just it. If things are moving so fast that you can't say how things should be done, then that tells you that the first thing that should be done is to slow everything way down.
For instance, by and large the role of organizing is not to get more money but rather to reduce indignities: wasted work, lack of forethought, bad management, arbitrary layoffs, etc. So it is much more about governing management with good practices than about keeping wages up; at least for now, wages are generally high anyway.
There are also reasons to defend jobs/wages in the face of e.g. outsourcing... but that's almost a separate problem. Maybe there needs to be both a union and an uncoupled professional standard or something?
>the role of organizing is not to get more money but rather to reduce indignities
Agreed. And I think that's why it's going to really start taking hold as we enter year 4 of mass layoffs in the US (because of outsourcing), alongside overwork for the "survivors" and abusive PIPs to keep people on edge.
A lot of the layoffs appear to be about conserving cash for investment in AI. In many cases the jobs that are cut are not backfilled by workers in the US or abroad.
What objective measures would you use?
Though I think Gen Z in general will be making waves in the coming years. They can't even get a foot in the door, so why should they care about "high salaries"?
Waste of my bloody time. Project completed, taking twice as many devs for twice as long, great success, PM promoted. Doesn’t do that basic thing that was the entire point of it. Nobody has ever cared.
Edit to explain why I care: there was a very nice third party utility/helper for our users. We built our own version because “only we can do amazing direct integration with the actual service, which will make it far more useful”. Now we have to support our worse in-house tool, but we never did any amazing direct integration and I guarantee we never will.
170 years ago is 1855.
Operation Just Cause? Desert Storm?
And, depending on how you look at it, the US won the wars in Afghanistan and Iraq, but lost the peace afterwards.
Shit rolls downhill and there's a lot more fuss when an engineer calls out risks, piss-poor planning, etc. than any actual introspection on why the risks weren't caught sooner or why the planning was piss-poor.
Can we also address the fact that “software spend” is distributed disproportionately to management at all levels and people who actually write the software are nickel and dimed. You’d save billions in spend and boost productivity massively if the management is bare bones and is held accountable like the rest of the folks.
Why let their own credibility get dragged down for a second time, third time, fourth time, etc…?
The first time is understandable but not afterwards.
But I don’t think a self respecting person would do that over and over.
I'll also say the obvious here about Sinclair's quote on salaries: you can indeed pay for someone's self-respect.
(Thus commanding a rate similar to a more competent person who doesn’t package it to sell.)
Cy Porter's home inspection videos... jeez. How these "builders" are still in business is mind-blowing to me (as a German). Here? Some of that shit he shows would lead to criminal charges for fraud.
People will do crazy things for just $100. Including literally get fucked in the ass by a stranger.
7 figures? Ho boy. They’ll use way fancier words though for that.
This is why software projects fail. We lowly developers always take the blame and management skates. The lack of accountability among decision makers is why things like the UK Post Office scandals happen.
Heads need to be put on pikes. Start with John Roberts, Adam Crozier, Moya Greene, and Paula Vennells.
That said, I think I would agree with your main concern there. If the question is "why did the devs make it so that project management didn't work?", it seems silly not to acknowledge why/how project management should have seen the evidence earlier.
That's what we call blameless culture lol
This leads to higher and higher towers of abstraction that eat up resources while providing little more functionality than if the problem were solved lower down. This has been further enabled by a long history of rapidly increasing compute capability and vastly increasing memory and storage sizes. Because they are only interacting with these older parts of their systems at the interface level, they often don't know that problems were solved years prior, or are capable of being solved efficiently.
I'm starting to see ideas that will probably form into entire pieces of software "written" on top of AI models as the new floor, where the model basically handles all of the mainline computation, control flow, and business logic. What would have required a dozen MHz and 4MB of RAM to run now requires TFLOPS and gigabytes -- and, being built from a fresh start again, it will fail to learn from any of the lessons learned when it was done 30 years ago and 30 layers down.
To do a new job, build afresh rather than complicate old programs by adding new "features".
I've been managing, designing, building and implementing ERP type software for a long time and in my opinion the issue is typically not the software or tools.
The primary issue I see is lack of qualified people managing large/complex projects because it's a rare skill. To be successful requires lots of experience and the right personality (i.e. low ego, not a person that just enjoys being in charge but rather a problem solver that is constantly seeking a better understanding).
People without the proper experience won't see the landscape in front of them. They will see a nice little walking trail over some hilly terrain that extends for a few miles.
In reality, it's more like the Fellowship of the Ring trying to make it to Mt. Doom, but that realization happens slowly.
And boy do the people making the decisions NOT want to hear that. You'll be dismissed as a naysayer being overly conservative. If you're in a position where your words have credibility in the org, then you'll constantly be asked "what can we do to make this NOT a quest to the top of Mt. Doom?" when the answer is almost always "very little".
You are 100% correct. The way I've tried to manage that is to provide the info while not appearing to be the naysayer, by giving some options. It makes it seem like I'm on board with the crazy-ass plan and just trying to find a way to make it successful, like this:
"Ok, there are a few ways we could handle this:
Option 1 is to do ABC first which will take X amount of time and you get some value soon, then come back and do DEF later
Option 2 is to do ABC+DEF at the same time but it's much tougher and slower"
Working teams are good for a project only, then they are destroyed.
When I was in grad school ages ago, my advisor told me to spend a week reading the source code of the system we were working with (TinyOS), and come back to him when I thought I understood enough to make changes and improvements. I also had a copy of the Linux Core Kernel Commentary that I perused from time to time.
Being able to dive into an unknown codebase and make sense of where the pieces are put together is a very useful skill that too many people just don't have.
It's more about being good at juggling 1000 balls at the same time. It's 99.9% of the time a management problem, not a software problem.
I do not think it is the only reason. The world is complex, but I do think it factors into why software is not treated like other engineering fields.
If we took the same approach to other engineering, we'd be constantly tearing down houses and rebuilding them just because we have better nails now. It sure would keep a lot of builders employed though.
This is almost exactly what happens in some countries.
On the other hand, Microsoft and Facebook did collude to keep salaries low. So who knows.
It was more tech companies in collusion than many people realize: (1) Apple and Google, (2) Apple and Adobe, (3) Apple and Pixar, (4) Google and Intel, (5) Google and Intuit, and (6) Lucasfilm and Pixar.
It was settled out of court. One of the plaintiffs was very vocal that the settlement was a travesty of justice. The companies paid less in the settlement than the amount they saved by colluding to keep wages down.
https://www.mercurynews.com/2014/06/19/judge-questions-settl...
Once you've worked in both hardware and software engineering, you quickly realize that they are only superficially similar. Software is fundamentally philosophy, not physics.
Hardware is constrained by real world limitations. Software isn't except in the most extreme cases. Result is that there is not a 'right' way to do any one thing that everyone can converge on. The first airplane wing looks a whole lot like a wing made today, not because the people that designed it are "real engineers" or any such BS, but because that's what nature allows you to do.
> In distributed systems there is no real shared state (imagine one machine in the USA another in Sweden) where is the shared state? In the middle of the Atlantic? - shared state breaks laws of physics. State changes are propagated at the speed of light - we always know how things were at a remote site not how they are now. What we know is what they last told us. If you make a software abstraction that ignores this fact you’ll be in trouble.[2]
[1]: “The Mess We’re In”, 2014 https://www.youtube.com/watch?v=lKXe3HUG2l4
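A tiny sketch of the constraint Armstrong describes (names and numbers are mine, not his): all a node can ever hold is the last value a remote site reported, plus how long ago it heard it.

    import time
    from dataclasses import dataclass

    @dataclass
    class RemoteView:
        value: str          # what the remote site last told us
        reported_at: float  # when we heard it, by our local clock

        def age_seconds(self) -> float:
            return time.time() - self.reported_at

    # We can say "Sweden reported OK 0.9s ago" - never "Sweden is OK now".
    sweden = RemoteView(value="OK", reported_at=time.time() - 0.9)
    print(f"last report: {sweden.value!r}, {sweden.age_seconds():.1f}s ago")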
And yet we scale the shit out of it, shifting limitations further and further. On that scale different problems emerge and there is no single person or even single team that could comprehend this complexity in isolation. You start to encounter problems that have never been solved before.
> Result is that there is not a 'right' way to do any one thing that everyone can converge on.
Are you trying to say there is in hardware? That must be why we have exactly one branch predictor design, lol
> The first airplane wing looks a whole lot like a wing made today, not because the people that designed it are "real engineers" or any such BS, but because that's what nature allows you to do.
"The first function call looks a whole lot like a function call today..."
I'll be that 'well akshually' guy. IIRC the AMD and Intel implementations are different enough that the Spectre/Meltdown exploits were slightly different for each manufacturer.
Source: wrote exploits.
> While hardware folks study and learn from the successes and failures of past hardware, software folks do not. People do not regularly pull apart old systems for learning.
For most IT projects, software folks generally can NOT "pull apart" old systems, even if they wanted to.
> Typically, software folks build new and every generation of software developers must relearn the same problems.
Project management has gotten way better than it was 20 years ago, so there are definitely some learnings that have been passed on.
If people want to know why Microsoft hated DOS and wanted to kill it with Xenix, then OS/2, then Windows, and then NT, it would be vital to know that DOS only came about as a result of IBM wanting a 16-bit source-compatible CP/M, which didn't yet exist. Then, you would likely want to read Dissecting DOS to see what limits were imposed by DOS.
For other stuff, you would start backwards. Take the finished product and ask what the requirements were, then ask what the pain points are, then start digging through the source and flowcharting/mapping it. This part is a must because programs are often too difficult to really grok without some kind of map/chart.
There is likely an entire discipline to be created in this…
Engineering is the intersection of applied sciences, economics and business. The economics aspect is almost never recognized and explains many things. Projects of other disciplines have significantly higher costs and risks, which is why they require a lot more rigor. Taking hardware as example, one bad design decision can sink the entire company.
On the other hand, software has economics that span a much more diverse range than any other field. Consider:
- The capital costs are extremely low.
- Development can be extremely fast at the task level.
- Software, once produced, can be scaled almost limitlessly for very cheap almost instantly.
- The technology moves extremely fast. Most other engineering disciplines have not fundamentally changed in decades.
- The technology is infinitely flexible. Software for one thing can very easily be extended for an adjacent business need.
- The risks are often very low, but can be very high at the upper end. The rigor applied scales accordingly. Your LoB CRUD app going down might bother a handful of people, so who cares about tests? But your flight control software better be (and is) tested to hell and back.
- Projects vary drastically in stacks, scopes and risk profiles, but the talent pool is more or less common. This makes engineering culture absolutely critical because hiring is such a crapshoot.
- Extreme flexibility also masks the fact that complexity compounds very quickly. Abstractions enable elegant higher-level designs, but they mask internal details that almost always leak and introduce minor issues that cause compounding complexity.
- The business rules that software automates are extremely messy to begin with (80K payroll rules!). However, the combination of a) flexibility, b) speed, and c) scalability engenders a false sense of confidence. Often no attempt is made at all to simplify business requirements, which is probably where the biggest wins hide. This is also what enables requirements to shift all the time, a prime cause of failures.
Worse, technical and business complexity can compound. E.g., it's very easy to think "80K payroll rules linearly means O(80K) software modules" and not "wait, maybe those 80K payroll rules interact with each other, so it's probably super-linear growth in complexity." Your architecture is then oriented towards the simplistic view and needs hacks when business reality inevitably hits, which then start compounding complexity in the codebase.
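A quick back-of-envelope (pure arithmetic, nothing from any real payroll system) shows how fast interactions swamp the rule count:

    from math import comb

    rules = 80_000
    print(f"{rules:,} rules, {comb(rules, 2):,} possible pairwise interactions")
    # 80,000 rules, 3,199,960,000 possible pairwise interactions

Even if only a tiny fraction of those pairs actually interact, the interactions, not the rules, dominate the architecture.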
And of course, if that's a contract up for bidding, your bid is going to be unsustainably low, which will be further depressed by the competitive bidding process.
If the true costs of a project -- which include human costs to the end users -- are not correctly evaluated, the design and rigor applied will be correspondingly out of whack.
As such I think most failures, in addition to regular old human issues like corruption, can be attributed to an insufficient appreciation of the economics involved, driven primarily by overindexing on the powers of software without an appreciation of the pitfalls.
Suffice to say, projects are significantly more likely to succeed when the power in the project is held by people who are competent /and/ understand the systems they are working with /and/ understand the problem domain you are developing a solution in. Whether or not they have a title like "engineer" or have a technical degree, or whatever other hallmark you might choose is largely irrelevant. What matters is competency and understanding, and then ultimately accountability.
Most large projects I've been a part of or near lacked all three of these things, and thus were fundamentally doomed to failure before they ever began. The people in power lacked competency and understanding, the entire project team and the people in power lacked accountability, and competency was unevenly distributed amongst the project team.
It may feel pithy, but it really is true that in many large projects the fundamental issue that leads to failure is that the decision makers don't know what they're doing and most of the implementers are incompetent. We can always root cause further to identify the incentive structures in society, and particularly in public/government projects that lead to this being true, but the fact remains at the project level this is the largest problem in my observation.
Because such failures are so common, management typically isn't punished for them, so it's hard to keep interests aligned. And because many producers are run on a cost-plus basis, there can be a perverse incentive to do a bad job, or at least to avoid doing a good one.
Do your two decades of experience cover both sides?
Do you mean the problem of wanting to build something without knowing how/having the skills, to build something?
Yes.
I appreciate both sides and have a wealth of experience in both. The challenge is that all the non-technical problems cannot be solved successfully while lacking a technical understanding. Projects generally don't fail for technical reasons, they fail because they were not set up for success, and that starts with having a clear understanding of requirements, feasibility, and a solid understanding of both the current state and the path to reach your desired outcomes, both politically/financially and technically.
I was an engineer for more than a decade, I've been in Product for nearly a decade, and I'm now a senior manager in Product. I can honestly say that I have the necessary experience to hold strong opinions here and to be correct in those opinions.
You need technical people who can also handle some of the non-technical aspects of a project with the reins of power if you want the project to succeed, otherwise it is doomed by the lack of understanding and competency of those in charge.
There's also the complexity gap. I don't think giving someone access to the Internet Explorer codebase is necessarily going to help them build a better browser. With millions of moving parts, it's impossible to tell what is essential, superfluous, high quality, or low quality. Fully understanding that prior art would be a years-long endeavor, no doubt full of insights, but of dubious value.
I know a lot of people on here will disagree with me saying this, but this is exactly how you get an ecosystem like JavaScript's being as fragmented, insecure, and "trend prone" as the old-school WordPress days. It's the same problems over and over, and every new "generation" of programmers has to relearn the lessons of old.
That's why every now and then we see "new" programming paradigms that were once declared obsolete.
Hardware folks just follow best practices and physics.
They're different problem spaces though, and having done both I think HW is much simpler and easier to get right. SW is often similar if you're working on a driver or some low-level piece of code. I tried to stay in systems software throughout my career for this reason. I like doing things 'right' and don't have much need to prove to anyone how clever I am.
I've met many SW folks who insist on thinking of themselves as rock stars. I don't think I've ever met a HW engineer with that attitude.
It's also hard when the team actually cares, but there are skills you can learn. Early in my career, I got into solving some of the barriers to software project management (e.g., requirements analysis and otherwise understanding needs, sustainable architecture, work breakdown, estimation, general coordination, implementation technology).
But once you're a bit comfortable with the art and science of those, big new challenges are more about political and environment reality. It comes down to alignment and competence of: workers, internal team leadership, partners/vendors, customers, and investors/execs.
Discussing this is a little awkward, but maybe start with alignment, since most of the competence challenges are rooted in mis-alignments: never developing nor selecting for the skills that alignment would require.
Was there any literature, or other findings you came across, that ended up clicking and working for you that you can recommend to us?
* The very first thing I read about requirements was Weinberg, and it's still worth reading. (Even if you are a contracting house, with a hopeless client, and you want to go full reactive scrum participatory design, to unblock you for sprints with big blocks of billable hours, not caring how much unnecessary work you do... at least you will know what you're not doing.)
* When interviewing people about business or technical, learn to use a Data Flow Diagram. You can make it accessible to almost everyone, as you talk through it, and answer all sorts of questions, at a variety of levels. There are a bunch of other system modeling tools you can use, at times, but do not underestimate the usefulness and accessibility of a good DFD.
* If you can (or have to) plan at all, find and learn to use a serious Gantt-chart-centric planning tool (work breakdown, dependencies, resource allocations, milestones), and keep it up to date (which probably includes having it linked with whatever task-tracking tool you use, though you'll usually also be changing it for bigger-picture reasons). Even if you are a hardware company with some hard external-dependency milestones, you will be changing things around those unmoveables. And have everyone work from the same source of truth (everyone can see the same Gantt chart and the tasks).
* Also learn some kind of Kanban-ish board for tasking, and have it be an alternative view on the same data that's behind the Gantt view and the tasks/issues that people can/should/are working on at the moment, and anything immediately getting blocked.
* In a rare disruptive startup emergency, know when to put aside Gantt, and fall back to an ad hoc text file or spreadsheet of chaos-handling prioritization that's changing multiple times per day. (But don't say that your startup is always in emergency mode and you can never plan anything, because usually there is time for a Kanban board, and usually you should all share an understanding of how those tasks fit into a larger plan, and trace back to your goals, even if it's exploratory or reactive.)
* Culture of communicating and documenting, in low-friction, high-value, accessible ways. Respect it as team-oriented and professional.
* Avoid routine meetings; make it easy to get timely answers and discussion, as soon as possible. This includes reconsidering how accessible upper leadership should be: can you get closer to being responsive to the needs of the work on the project (e.g., if anyone needs a decision from the director/VP/etc., then quickly prep and ask, maybe with an async message, but don't wait for weekly status meeting or to schedule time on their calendar).
* Avoid unnecessary process. Avoid performances.
* People need blocks of time when they can get flow. Sometimes for plowing through a big chunk of stuff that only requires basic competence, and sometimes when harder thinking is required.
* Be very careful with individual performance metrics. Ideally you can incentivize everyone to be aligned towards team success, through monetary incentives (e.g., real equity whose value they can affect) and through culture (everyone around you seems to work as a team, and you like that, and that inspires you). I would even start by asking whether we can compensate everyone equally, shun titles, etc., and how close we can practically get to that.
* Be honest about resume-driven-development. It doesn't have to be a secret misalignment. Don't let it be motivated solely as a secret goal of job-hoppers that is then lied about, or it will probably be to the detriment of your company (and also, that person will job-hop, fleeing the mess they made). If you're going to use new resume keyword framework for a project, the whole team should be honest that, say, there's elements of wanting to potentially get some win from it, wanting to trial it for possible greater use and build up organizational expertise in it, and also that it's a very conscious and honest perk for the workers to get to use the new toy.
* Infosec is an unholy dumpster fire, throughout almost the entire field. Decide if you want to do better, and if so, then back it up with real changes, not CYA theatre and what someone is trying to sell you.
* LeetCode frat-pledging interviews select for so much misaligned thinking, and also signal that you are probably just more of the same as the low bar of our field, and people shouldn't take you seriously when you try to tell them you want to do things better.
* Nothing will work well if people aren't aligned and honest.
How do you know when to call it quits? How do you know when people are not aligned or honest, or that you are not right for the team, or when the team is not right for the client/project?
How much time is normal for a team/project to get its bearings? (It depends, I know...)
For anyone else who had no idea who that was: https://en.wikipedia.org/wiki/Gerald_Weinberg (also known as Jerry Weinberg). His blog is also still online: https://secretsofconsulting.blogspot.com/2012/09/agile-and-d...
I guess that’s the real problem I have with SV’s endemic ageism.
I was personally offended, when I encountered it, myself, but that’s long past.
I just find it offensive, that experience is ignored, or even shunned.
I started in hardware, and we all had a reverence for our legacy. It did not prevent us from pursuing new/shiny, but we never ignored the lessons of the past.
Not at all. The mistake to learn from in Webvan's case was expanding too quickly and investing in expensive infrastructure all before achieving product-market fit. Not that they delivered groceries.
Also, your understanding of evolution is incorrect. All species on Earth are the results of an enormous amount of accumulated "experience", over periods of up to billions of years. Even the bacteria we have today took hundreds of millions of years to reach anything similar to their current form.
The only thing that seems to change this is consequences. Take a random person and just ask them to do something, and whether they do it or not is just based on what they personally want. But when there's a law that tells them to do it, and enforcement of consequences if they don't, suddenly that random person is doing what they're supposed to. A motivation to do the right thing. It's still not a guarantee, but more often than not they'll work to avoid the consequences.
Therefore if you want software projects to stop failing, create laws that enforce doing the things in the project to ensure it succeeds. Create consequences big enough that people will actually do what's necessary. Like a law, that says how to build a thing to ensure it works, and how to test it, and then an independent inspection to ensure it was done right. Do that throughout the process, and impose some kind of consequence if those things aren't done. (the more responsibility, the bigger the consequence, so there's motivation commensurate with impact)
That's how we manage other large-scale physical projects. Of course those aren't guaranteed to work; large-scale public works projects often go over-budget and over-time. But I think those have the same flaw, in that there isn't enough of a consequence for each part of the process to encourage humans to do the right thing.
If there was sufficient consequence for this stuff, no one would ever take on any risk. No large works would ever even be started because it would be either impossible or incredibly difficult to be completely sure everything will go to plan.
So instead we take a medium amount of caution and take on projects knowing it's possible for them to not work out or to go over budget.
Ah finally - I've had to scroll halfway down to find a key reason big software projects fail.
<rant>
I started programming in 1990 with PL/1 on IBM mainframes and for 35 years have dipped in and out of the software world. Every project I've seen fail was mainly down to people - egos, clashes, laziness, disinterest, inability to interact with end users, rudeness, lack of motivation, toxic team culture etc etc. It was rarely (never?) a major technical hurdle that scuppered a project. It was people and personalities, clashes and confusion.
</rant>
Of course the converse is also true - big software projects I've seen succeed were down to a few inspired leaders and/or engineers who set the tone. People with emotional intelligence, tact, clear vision, ability to really gather requirements and work with the end users. Leaders who treated their staff with dignity and respect. Of course, most of these projects were bland corporate business data ones... so not technically very challenging. But still big enough software projects.
Geez... I don't know why I'm getting so emotional (!) But the hard-core software engineering world is all about people at the end of the day.
I completely agree. I would just like to add that this only works where the inspired leaders are properly incentivized!
https://en.wikipedia.org/wiki/Auburn_Dam
https://en.wikipedia.org/wiki/Columbia_River_Crossing
If you're 97% over budget, are you successful? https://en.wikipedia.org/wiki/Big_Dig
I don't like this as a metric of success, because who came up with the budget in the first place?
If they did a good job and you're still 97% over then sure, not successful.
But if the initial budget was a dream with no basis in reality then 97% over budget may simply have been "the cost of doing business".
It's easier to say what the budget could be when you're doing something that has already been done a dozen times (as skyscraper construction used to be for New York City). It's harder when the effort is novel, as is often the case for software projects since even "do an ERP project for this organization" can be wildly different in terms of requirements and constraints.
That's why the other comment about big projects ideally being evolutions of small projects is so important. It's nearly impossible to accurately forecast a budget for something where even the basic user needs aren't yet understood, so the best way to bound the amount of budget/cost mismatch is to bound the size of the initial effort.
FWIW I have read The Phoenix Project, and it did help me get a better understanding of "Agile" and the DevOps mindset, but since it's not something I apply in my work routinely, it's hard to keep it fresh.
My goal is to try to plant seeds of success in the small projects I work on and eventually ask questions that get people to think from a similar perspective.
Unix was an effort to take Multics, an operating system that had gotten too modular, and integrate the good parts into a more unified whole (book recommendation: https://www.amazon.com/UNIX-History-Memoir-Brian-Kernighan/d...).
Even though there were some benefits to the modularity of Multics (apparently you could unload and replace hardware in Multics servers without reboot, which was unheard of at the time), it was also its downfall. Multics was eventually deemed over-engineered and too difficult to work with. It couldn't evolve fast enough with the changing technological landscape. Bell Labs' conclusion after the project was shelved was that OSs were too costly and too difficult to design. They told engineers that no one should work on OSs.
Ken Thompson wanted a modern OS so he disregarded these instructions. He used some of the expertise he gained while working on Multics and wrote Unix for himself (in three weeks, in assembly). People started looking over Thompson's shoulder being like "Hey what OS are you using there, can I get a copy?" and the rest is history.
Brian Kernighan described Unix as "one of" whatever Multics was "multiple of". Linux eventually adopted a similar architecture.
More here: https://benoitessiambre.com/integration.html
What would be a competitor to linux that is also FOSS? If there's none, how do you assess the success or otherwise of Linux?
Assume Linux did not succeed but was adopted; what would that scenario look like? Is the current situation with it different from that?
*BSD?
As for large, successful open source software: GCC? LLVM?
- Define "success" early on. This usually doesn't mean meeting a deadline on time and budget. That is actually the start of the real goal. The real success should be determined months or years later, once the software and processes have been used in a production business environment.
- Pay attention to Conway's Law. Fight it at your peril.
- Beware of the risk of key people. This means if there is a single person who knows everything, you have a risk if they leave or get sick. Redundancy needs to be built into the team, not just the hardware/architecture.
- No one cares about preventing fires from starting. They do care about fighting fires late in the project and looking like a hero. Sometimes you just need to let things burn.
- Be prepared to say "no", a lot. (This is probably the most important one, and the hardest.)
- Define ownership early. If no one is clearly responsible for the key deliverables, you are doomed.
- Consider the human aspect as equal to the technical. People don't like change, and you will be introducing a lot of change. Balancing this needs to be built into the project at every stage.
- Plan for the worst, hope for the best. Don't assume things will work the way you want them to. Test _everything_, always.
[Edit. Adding some items.]
As a Californian, I hate this mentality so much. Why can't we just have a smooth release with minimal drama because we planned well? Maybe we could properly fix some tech debt or even polish up some features if we're not spending the last 2 months crunching on some showstopper that was pointed out a year ago.
Your best bet is a $500 GDC Vault subscription that offers relative scraps of a schematic, and making your own from those experiences.
And although that, in itself, should be scary enough, it is nothing compared to the political tsunami and unrest it will bring in its wake.
Most of the Western world is already on shaky political ground, flirting with the extreme-right. The US is even worse, with a pathologically incompetent administration of sociopaths, fully incapable of coming up with the measures necessary to slow down the train of doom careening out of control towards the proverbial cliff of societal collapse.
If the societal tensions are already close to breaking point now, in a period of relative economic prosperity, I cannot begin to imagine what they will be like once the next financial crash hits. Especially one in the multiple trillions of dollars.
They say that humanity progresses through episodes of turmoil and crisis. Now that we literally have all the knowledge of the world at our fingertips, maybe it is time to progress past this inadequate primeval advancement mechanism, and to truly enter an enlightened age where progress is made from understanding, instead of crises.
Unfortunately, it looks like it's going to take monumental changes to stop the parasites and the sociopaths from making a quick buck at the expense of humanity.
So basically things will still go where they were always going to go, just a lot faster. That's not necessarily a bad thing.
You're placing a lot of faith in that if-statement, in an article that pretty much says we in fact lack strong discipline and quality control.
The article is sound, but its focus on large public failures disregards the vast, vast, vast majority of the universe of software projects that nobody really thinks about, because they mostly just work -- websites and mobile apps and games and internal LoB CRUD apps and cloud services and the huge ecosystem of open source projects and enterprise and hobby software.
Without some consideration of that, we cannot really generalize this article to reflect the "success rate" of our industry.
That said, I think the acceleration introduced by AI is overall a "Good Thing (tm)" simply because, all else being equal, it's generally better to fail faster rather than later.
In practice, it will make people even less care or pay attention. These big disasters will be written by people without any skills using AI.
This is not a hypothetical, this is based on reports using large-scale data like DORA and DX: https://blog.robbowley.net/2025/11/05/findings-from-dxs-2025...
Edited to add: To clarify, I meant that if an organization was going to deliver a billion-dollar boondoggle of a project, AI will not change that outcome, but it WILL help deliver that faster. Which is why I meant it's not necessarily a bad thing, because as in software, it's generally better to fail faster.
[1] https://www.amazon.com/-/en/dp/B0B63ZG71H
[2] https://www.scribd.com/document/826859800/How-Big-Things-Get...
I would think cloud-disconnectedness (eg. computers without cloud hosted services) would come far before de-computerization.
Then in the 2010s they spent $185M on a customized version of PeopleSoft, implemented by IBM, that was managed directly by a government agency: https://en.wikipedia.org/wiki/Phoenix_pay_system
And now in the 2020s they are going to spend $385M integrating an existing SaaS made by https://en.wikipedia.org/wiki/Dayforce
That's probably one of the worst and longest software failures in history.
Then when Harper came in he killed the registry mostly for ideological reasons.
But then he didn't want to destroy a bunch of jobs in Miramichi, so he gave them another project to turn into a fiasco.
I have “magical moments” with these tools, sometimes they solve bugs and implement features in 5 minutes that I couldn’t do in a day… at the same time, quite often they are completely useless and cause you to waste time explaining things that you could probably just code yourself much faster.
Granted, this is an exceedingly hard problem, and I suppose there's some value in reminding ourselves of it - but I'd much rather read thoughts on how to do it better, not just complaints that we're doing it poorly.
The second problem is the big con.
So, what's the point here, exactly? "Only licensed engineers as codified by (local!) law are allowed to do projects?" Nah, can't be it, their track record still has too many failures, sometimes even spectacularly explosive and/or implosive ones.
"Any public project should only follow Best Practices"? Sure... "And only make The People feel good"... Incoherent!
Ehhm, so, yeah, maybe things are just complicated, and we should focus more on the amount of effort we're prepared to put in, the competency (i.e., pay grade) of the staff we're willing to assign, and exactly how long we're willing to wait before conceding defeat?
Large-scale systems tend to fail: large, centralised, centrally managed systems with big budgets, large numbers of people who need to coordinate, and lots of people with an interest in the project pushing and lobbying for different things.
Multiple smaller systems is usually a better approach, where possible. Not possible for things like transport infrastructure, but often possible for software.
It depends what you define as a system. Arguably a lot of transport infrastructure is a bunch of small systems linked with well-understood interfaces (e.g. everyone agrees on the gauge of rail that's going to be installed and the voltage in the wires).
Consider how construction works in practice. There are hundreds or thousands of workers working on different parts of the overall project and each of them makes small decisions as part of their work to achieve the goal. For example, the electrical wiring of a single train station is its own self-contained system. It's necessary for the station to work, but it doesn't really depend on how the electrical system is installed in the next station in the line. The electricians installing the wiring make a bunch of tiny decisions about how and where the wires are run that are beyond the ability of someone to specify centrally - but thanks to well known best practices and standards, everything works when hooked up together.
Sounds miserable.
Also, LLMs don't learn. :)
Software is not the same as building in the physical world, where we get economies of scale.
Building 1,000 bridges will make the cost of the next incremental bridge cheaper due to a zillion factors; even if Bridge #1 is built from sticks, we'll learn standards, stable fundamental engineering principles, predictable failure modes, etc., and eventually reach a stable, repeatable, scalable approach to building bridges. They will very rarely (in modernity) catastrophically fail (yes, Tacoma Narrows happened, but in properly functioning societies it's rare).
Nobody will say "I want to build a bridge upside-down, out of paper clips, that can withstand a 747 driving over it," because that's physically impossible. But nothing's impossible in software.
Software isn't scalable in this way. It's not scalable because it doesn't have hard constraints (like the laws of physics), so anything goes and anything can be in scope; and since writing and integrating large amounts of code is a communication exercise, it suffers from diseconomies of scale.
Customers want the software to do exactly what they want and - within reason - no laws of physics are violated if you move a button or implement some business process.
Because everyone wants to keep working the way they want to work, no software project (even if it sounds the same) is the same. Your company's bespoke accounting software will be different from mine, even if we are direct competitors in the same market. Our business processes are different, org structures are different, sales processes are different, etc. So they all build different accounting software, even if the fundamentals (GAAP, double-entry bookkeeping, etc.) are shared.
It's also the same reason why enterprise software sucks. Do you think that a startup building expense management starts off as a giant mess of garbage? No! It starts off simple and clean and beautiful, because its initial customer base (startups) are beggars and cannot be choosers, so they adapt their process to the tool. But then larger companies come along with dissimilar requirements, and Expense Management SaaS Co. wins those deals by changing the product to work with whatever oddball requirements they have, and so on, until the product is essentially a bunch of config options and workflows that you have to build yourself.
(Interestingly, I think these products become asymptotically stuck - any feature you add or remove will make some of your customers happy and some of your customers mad, so the product can never get "better" globally).
We can have all the retrospectives and learnings we want but the goal - "Build big software" - is intractable, and as long as we keep trying to do that, we will inevitably fail. This is not a systems problem that we can fix.
The lesson is: "never build big software".
(Small software is stuff like Bezos' two pizza team w/APIs etc. - many small things make a big thing)
I am surprised by the lack of creativity in these projects. Why don't they start 5 small projects building the same thing and let them work for a year? At the end of the year you cancel one of the projects, increasing the funding of the other four. You can do that every year based on the results. It may look like waste, but it will significantly increase your chances of succeeding.
Build 1000 JSON parsers and tell me if the next one isn't cheaper to develop with "(we'll learn standards, stable, fundamental engineering principles, predicable failure modes, etc.)"
>Software isn't scalable in this way. It's not scalable because it doesn't have hard constraints (like the laws of physics)
Uh, maybe fewer, but "none" is way too far. Get 2 billion integer operations per second out of a 286; the 500-mile email; big data storage; etc. Physical limits are everywhere.
>It's also the same reason why enterprise software sucks.
The reason enterprise software sucks is the lack of introspection and learning from the garbage that went before.
Most advertising campaigns fail.
Lots to break down in this article beyond this initial quotation, but I find a lot of parallels between failing software projects, this attitude, and my recent hyper-fixation (it seems to spark up again every few years): the sinking of the Titanic.
It was a combination of failures like this. Why was the captain going full speed ahead into a known ice field? Well, the boat can't sink, and there (may have been) organizational pressure to arrive in New York at a certain time (aka, the imaginary deadline must be met). Why weren't there enough life jackets and boats for crew and passengers? Well, the boat can't sink anyway; why worry about something that isn't going to happen? Why train the crew on how to deploy the life rafts and run emergency procedures properly? Same reason. Why didn't the SS Californian rescue the ship? Well, the third-party Titanic telegraph operators were under immense pressure to send telegrams to New York, and the chatter about the ice field got on their nerves, so they mostly ignored it (misaligned priorities). If even a little caution and forward thinking had been used, the death toll would have been drastically lower, if not nearly nonexistent. It took 2 hours to sink, which is plenty of time to evacuate a boat of that size.
Same with software projects - they often fail over a period of multiple years, and if you go back and look at how they went wrong, there are often numerous points and decisions that could have reversed course. Yet often the opposite happens: management digs in even more. Project timelines are optimistic to the point of delusion and don't build failures or setbacks into schedules or roadmaps at all. I had to rescue one of these projects several years ago, and it took a toll on me that I'm pretty sure I carry to this day; I'm wildly cynical of "project management" as it relates to IT/devops.
But the rest of your comment reveals nothing novel beyond what anyone would find after watching James Cameron's movie multiple times.
I suggest you go to the original inquiries (congressional in the US, Board of Trade in the UK). There is a wealth of subtle lessons there.
Hint: Look at the Admiralty Manual of Seamanship that was current at that time and their recommendations when faced with an iceberg.
Hint: Look at the Board of Trade (UK) experiments with the turning behaviour of the sister ship. In particular of interest is the engine layout of the Titanic and the attempt by the crew, inexperienced with the ship, to avoid the iceberg. This was critical to the outcome.
Hint: Look at the behaviour of Captain Rostron. Lots of lessons there.
For instance, software in safety-critical systems is developed with enormous rigor. However, that level of investment does not make sense for the run-of-the-mill internal LOB CRUD apps which constitute the vast majority of the dark matter of the software universe.
Software engineering is also nothing special when it comes to various failure modes, because you'll find similar examples in other engineering disciplines.
I commented about this at length a few days ago: https://news.ycombinator.com/item?id=45849304
You update the system for one small piece, while reconciling with the larger system. Then replace other pieces over time, broadening your scope until you have improved the entire system. There is no other way to succeed without massive pain.
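This is essentially the strangler-fig pattern with a dual-run reconciliation step. A minimal sketch of the shape (all class and method names here are hypothetical):

    import logging

    log = logging.getLogger("migration")

    class DualRunFacade:
        """Serve from the legacy system, shadow-call the replacement,
        and log divergences so each piece is proven before cutover."""

        def __init__(self, legacy, replacement, migrated_ops):
            self.legacy = legacy
            self.replacement = replacement
            self.migrated_ops = migrated_ops    # slices already cut over

        def handle(self, op, payload):
            if op in self.migrated_ops:         # done: new path is live
                return self.replacement.handle(op, payload)
            old = self.legacy.handle(op, payload)
            try:                                # shadow call, never user-facing
                new = self.replacement.handle(op, payload)
                if new != old:
                    log.warning("divergence on %s: %r != %r", op, old, new)
            except Exception:
                log.exception("replacement failed on %s", op)
            return old                          # legacy stays authoritative

When an operation stops diverging for long enough, you add it to migrated_ops and move to the next piece; the scope broadens exactly as described above.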
There are no generic, simple solutions for complex IT challenges. But there are ground rules for finding and implementing simple solutions. I have created a playbook to prevent IT disasters, "The Art and Science Towards Simpler IT Solutions"; see https://nocomplexity.com/documents/reports/SimplifyIT.pdf
absent understanding, large companies engage in cargo cult behaviors: they create a sensible org chart, produce a Gantt chart, have the coders start whacking code, and presume that in 9 months a baby comes out.
every time, ugly baby
translation: "leave it to us professionals". Gate-keeping of this kind is exactly how computer science (the one remaining technical discipline still making reliable progress) could become like all the other anemic, cursed fields of engineering. People thinking "hey, I'm pretty sure I could make a better version of this" and then actually doing it is exactly how progress happens. I hope nobody reads this article and takes it seriously.
I trust my phone to work so much that it is now the single, non-redundant source for the keys to my apartment, the keys to my car, and my payment method. Phones could only even hope to do all of these things as of ~4 years ago, and only as of ~this year do I feel confident enough not to carry redundancies. My phone has never breached that trust so critically that I feel I need to.
Of course, this article talks about new software projects. And I think the truth and reason of the matter lies in this asymmetry: Android/iOS are not new. Giving an engineering team agency and a well-defined mandate that spans a long period of time oftentimes produces fantastic software. If that mandate often changes; or if it is unclear in the first place; or if there are middlemen stakeholders involved; you run the risk of things turning sideways. The failure of large software systems is, rarely, an engineering problem.
But, of course, it sometimes is. It took us ~30-40 years of abstraction/foundation building to get to the pretty darn good software we have today. It'll take another 30-40 years to add one or two more nines of reliability. And that's ok; I think we're trending in the right direction, and we're learning. Unless we start getting AI involved; then it might take 50-60 years :)
1. Connecting pay to work - estimates (replanning is learning, not failure)
2. Connecting work to pay - management (the world is fractal-like, scar tissue and band-aids)
I do not pre-suppose that there are definite solutions to these problems - there may be solutions, but getting there may require going far out of our way. As the old farmer said "Oh, I can tell you how to get there, but if I was you, I wouldn't start from here"
1. Pay to Work - someone is paying for the software project, and they need to know how much it will cost. Thus estimates are asked for, an architecture is asked for, and the architecture is tied to the estimates.
This is 'The Plan!'. The project administrators will pick some lifecycle paradigm to tie the architecture to the cost estimate.
The implementation team will learn as they do their work. This learning is often viewed as failure, as the team will try things that don't work.
The implementation team will learn that the architecture needs to change in some large ways and many small ways. The smallest changes are absorbed into regular work. Medium and large changes will require more time (thus money); this request for more money will be viewed as a failure of estimation, not as learning.
2. Work to Pay - as the architecture is implemented, development tasks are completed. The Money People want Numbers, because Money People understand how they feel about Numbers. Also these Numbers will talk to other Numbers outside the company. Important Numbers with names like Share Price.
Thus many layers of management are chartered and instituted. The lowest layer of management is the self-managed software developer. The software developer will complete development tasks related to the architecture, tied to the plan, attached to the money (and the spreadsheets grew all around, all around [0]).
When the developer communicates about work, the Management Chain cares to hear about Numbers, but sometimes they must also involve themselves in failures.
It is bad to fail, especially repeated failures at the same kind of task. So managers institute rules to prevent failures. These rules are put in a virtual cabinet, or bureau. Thus we have Rules of the Bureau or Bureaucracy. These rules are not morally bad or good; not factually incorrect or correct, but whenever we notice them, they feel bad; We notice the ones that feel bad TO US. We are often in favor of rules that feel bad to SOMEONE ELSE. You are free to opt out of this system, but there is a price to doing so.
----
This is getting long, so I'll stop decoding the verbiage and just give the summary:
Thus it is OK for individuals to learn many small things, but it is a failure for the organization to learn large things. Trying to avoid and prevent failure is viewed as admirable; trying to avoid learning is self-defeating.
----
0. https://www.google.com/search?q=the+green+grass+grew+all+aro...
> git commit -am "decomposing recapitulating and recontextualizing software development bureaucracy" && git push
Somehow I come away skeptical of the supposedly inevitable conclusion that Phoenix was doomed to fail, and suspect instead that they were hamstrung by architecture constraints dictated by assholes.
https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compens...
Payroll systems seem to be a massively complicated beast.
Now, in the new project, they have put together a committee to attempt it:
> The main objective of this committee also includes simplifying the pay rules for public servants, in order to reduce the complexity of the development of Phoenix's replacement. This complexity of the current pay rules is a result of "negotiated rules for pay and benefits over 60 years that are specific to each of over 80 occupational groups in the public service." making it difficult to develop a single solution which can handle each occupational groups specific needs.
Any time you think about touching them, the people who get those salaries come out in droves and no one else cares, so the government has every incentive to leave them alone.
No single person is going to understand all of the history and legality involved, or be able to represent the people on all sides of this mess.
Yes, this means discussion, investigation, almost certainly months of effort to find something that works, and lots of compromise. That's how adults deal with complex situations.
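To make the combinatorics concrete, here is a deliberately tiny sketch of what "pay rules per occupational group" looks like in code. Every rule and number below is invented for illustration; the real system has ~80,000 rules across 105 agreements:

    BASE_OVERTIME_MULTIPLIER = 1.5

    # Three toy groups; the real system has over 80 occupational groups,
    # each with decades of negotiated special cases.
    GROUP_RULES = {
        "correctional": {"overtime_after_h": 37.5, "shift_premium": 2.00},
        "nurses":       {"overtime_after_h": 36.0, "weekend_multiplier": 1.25},
        "clerical":     {"overtime_after_h": 40.0},
    }

    def weekly_pay(group, hours, rate, weekend_hours=0.0):
        rules = GROUP_RULES[group]
        cutoff = rules["overtime_after_h"]
        regular = min(hours, cutoff)
        overtime = max(hours - cutoff, 0.0)
        pay = regular * rate + overtime * rate * BASE_OVERTIME_MULTIPLIER
        pay += rules.get("shift_premium", 0.0) * hours
        # weekend premium: only the extra fraction above base pay
        pay += weekend_hours * rate * (rules.get("weekend_multiplier", 1.0) - 1.0)
        return round(pay, 2)

    print(weekly_pay("nurses", 42, 40.0, weekend_hours=8))  # 1880.0

Three toy groups already force special-casing; multiply by 60 years of negotiation and the testing surface alone explains a lot of the Phoenix story.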
1. Enable grift to cronies
2. Promo-driven culture
3. Resume-oriented software architecture
Software is also incredibly hard; the human mind understands physical space very well, but once we're deep into abstractions it simply struggles to keep up.
It is easier to explain to virtually anyone how to build a house from scratch than how a mobile app or Excel works.
It's leadership and accountability (well, the lack of them).
So, with outsourcing, we added a language and cultural barrier, a 12-hour offset, and thousands of miles of separation.
Software was already failing and mismanaged.
So now we take the above failures and tack on an AI "prompt engineering" barrier (operated by the same outsourced labor).
And on top of that, all the engineers who know what they are doing are devalued in the market, while the newer engineers will be AI-braindead.
Everything will be fixed!
In the same way that hardware improvements are quickly gobbled up by more demanding software.
The people doing the programming will also be more removed technically. I can do Python, Java, Kotlin. I can do a little C++, less C, and a lot less assembly.
First, we as a society should really be scrutinizing what we invest in. Trillions of dollars could end homelessness as a rounding error.
Second, real people are going to be punished for this as the layoffs go into overdrive, people lose their houses and people struggle to have enough to eat.
Third, the ultimate goal of all this investment is to displace people from the labor pool. People are annoying. They demand things like fair pay, safe working conditions and sick leave.
Who will buy the results of all this AI if there’s no one left with a job?
Lastly, the externalities of all this investment are indefensible. For example, air and water pollution and rising utility prices.
We’re barreling towards a future with a few thousand wealthy people, where everyone else lives in worker housing, owns nothing, and is the next incarnation of brick kiln workers on wealthy estates.
Over the course of a few years (so as to not drive up the price of politicians too quickly) one could buy the top N politicians from most countries. From there on out your options are many.
After a decade or so you can probably have your trillion back.
> Phoenix project executives believed they could deliver a modernized payment system, customizing PeopleSoft’s off-the-shelf payroll package to follow 80,000 pay rules spanning 105 collective agreements with federal public-service unions. It also was attempting to implement 34 human-resource system interfaces across 101 government agencies and departments required for sharing employee data.
So basically people -- none of them in IT, but rather working for the government -- built something extraordinarily complex (80k rules!), and then act like it was unforeseeable that this would make anything downstream at least equally complex. And then the article blames IT in general, when this data point tells us that replacing a business process that used to require (per [1]) 2,000 pay advisors will be complex. All while working in an organization that has shit the bed so thoroughly that paying its employees requires 2k people -- for an organization of 290k, that's 0.6% of headcount spent on paying employees!
IT is complex, but incompetent people and incompetent orgs do not magically become competent when undertaking IT projects.
Also, making extraordinarily complex things and then shouting the word "computer" at them like you're playing D&D and it's a spell does not make them simple.
[1] https://www.oag-bvg.gc.ca/internet/English/parl_oag_201711_0...
* Formatting
* Style
* Conventions
* Patterns
* Using the latest frameworks or what's en vogue
I think where I've seen results delivered effectively and consistently is where a universal style is enforced, which removes individualism from the codebase. Some devs will not thrive in that environment, but it makes the code a means to the end rather than the end itself.
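One concrete way to make the style universal rather than aspirational is a hard CI gate. A minimal sketch assuming a Python codebase (the specific tools, black and ruff, are illustrative choices):

    #!/usr/bin/env python3
    # Fail the build unless formatting and lint checks pass, so style is
    # enforced by the pipeline instead of argued about in review.
    import subprocess
    import sys

    CHECKS = [
        ["black", "--check", "."],   # formatting: any diff fails the build
        ["ruff", "check", "."],      # lint: convention violations fail too
    ]

    def main() -> int:
        for cmd in CHECKS:
            if subprocess.run(cmd).returncode != 0:
                print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The point is not the particular tools but that the check is non-negotiable, which is what actually removes the individualism.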
I'd consider managing that stuff essentially table-stakes in big orgs these days. It doesn't stop projects from failing in highly expensive and visible ways.