Vietnam had been an anomaly. That war would have been won, in the opinion of most senior military officers, if the U.S. Armed Forces had been permitted by their civilian masters to fight it like it should have been fought.
This is unrelated to the rest of the article, but it is very interesting to see, especially today, when Russian soldiers are starting to blame the Kremlin for not using the full force of the Russian army and for being responsible for the disaster.
Part 2 is really interesting (in fact Part 1 only exists to give context to Part 2, and it could have been ten times shorter).
In the face of the 1973 war, the Army would have had two choices: view it through an unobstructed lens in an attempt to understand what might be wrong with its new approach, or utilize the conflict to validate the decisions it had already made. The validation approach is simultaneously more satisfying and less risky than seeing one’s own errors in the mistakes of either side in the war. First, validation shows all the hard work has been paying off. We are on the right track. Second, findings that question the current path put the credibility of the institutions and senior leaders who determined that course at risk. They can also challenge the significant investments made in programs that might be deemed irrelevant to the war’s lessons.
The creation of the U.S. Army Training and Doctrine Command in 1973 would have created strong biases that could have skewed the key lessons from the 1973 War. The reputation of the Army as an institution, not to mention those of its most senior leaders, was at stake. This is not to say that their behavior was disingenuous. It was not. It is, however, a warning that well-meaning leaders who deeply believe in the results of their hard work are hard to convince that their efforts are wrong, even in the face of new evidence. This is particularly true if the new reality could upend hard-won gains in budget battles or service relevance.
An example from the interwar period and the start of the Second World War:
During the interwar period, the branch chiefs held great authority over their branch’s doctrine, personnel, and materiel requirements. The Air Force had not yet gained its independence and was a branch of the Army. In February 1942, Maj. Gen. John Herr, the U.S. Army chief of cavalry, met with Army Chief of Staff Gen. George Marshall. He truly believed what he told Marshall: “In the interest of National Defense in this crisis, I urge upon you the necessity of an immediate increase in horse cavalry.” From his perch, and with the experience of a full and successful career, Herr viewed horse cavalry as a key reason for German successes in Poland and France. He honestly believed what he told Marshall, and Germany did have cavalry formations. Thus, if you looked for validating observations, you could find them and laud their importance.
The other Army branches also searched for supportive lessons from these early German successes. The chief of infantry highlighted the contributions of German infantry, while the Army Air Corps contended that the strategic bombing of Warsaw had been central to the German victory over Poland. Finally, the chief of the newly formed Armored Force, who was basing his concepts largely on the cavalry tactics he had developed in the 7th Cavalry Brigade (Mechanized), saw the use of tanks by the Germans as a validation of his approach.
They all missed the reality of the blitzkrieg, which lay in combining armor and air power. Indeed, the Armored Force doctrinal manuals did not require air support for operations. Consequently, the Army, less the disbanded horse cavalry, took its existing concepts and weapons into the war, where it suffered unnecessarily for these parochial decisions. However, it is important to understand that all of these senior officers believed their validating observations. It would be difficult to expect them to see and believe something that conflicted with what they had spent their careers mastering.
The decision to rush the game’s development seems ludicrous in retrospect, but Allen Adham, the company’s president, was under pressure to grow revenue. While Blizzard’s early games had been far more successful than expected, that just raised expectations for future growth.
Oh, hello, queueing theory and parallel processing theory! As you increase utilization (remove slack), throughput may grow, but latency grows faster. If your nodes interact and depend on each other, tail latency can grow completely out of hand in exchange for tiny gains in utilization, and actual throughput may even go down once utilization passes a certain point.
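A minimal sketch of that curve, assuming the textbook M/M/1 queue (a deliberately crude model of a single service node; the numbers are made up):

```python
# Mean time spent in an M/M/1 system is 1 / (mu - lambda).
# Real services are messier, but the shape of the curve is the point:
# the last few percent of utilization cost an order of magnitude in latency.

service_rate = 100.0  # requests per second one node can handle (mu)

for utilization in (0.50, 0.80, 0.90, 0.95, 0.99):
    arrival_rate = utilization * service_rate             # lambda
    mean_latency_s = 1.0 / (service_rate - arrival_rate)  # mean time in system
    print(f"utilization {utilization:4.0%} -> mean latency {mean_latency_s * 1000:7.1f} ms")
```

With these numbers, going from 50% to 80% utilization already takes the mean latency from 20 ms to 50 ms, and 99% utilization puts it at a full second; tail latencies degrade even faster.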
Always have some free disk space. Do not run your web servers at 100% CPU utilization. Overprovision your clusters. Add redundant network links. These ideas look completely obvious, don't they?
Now apply them at the org level. Always have someone with spare cycles. Leave some time at the end of the sprint after all planned tickets are done. Allocate time during the day to sit and think, or to experiment, or to take a walk. Let the on-call person mostly sit around and watch the system operate instead of churning something out. Now this sounds a little bit crazy, doesn't it? So much efficiency is left on the table!
This comment (on the rewrite of Tor in Rust) sums up quite well what a lot of people (managers in particular…) fail to understand about software, and about development in particular:
They know the existing codebase. A large part of writing a codebase is experimentation, the "research" in Research and Development. Once you've solved a problem once, however, you know a lot more about it, allowing much faster progress should you attempt to solve it again.
They have the existing test-suite. It's amazing how much knowledge is condensed in a test-suite, and how much time it takes to accumulate it.
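A hypothetical illustration of what that accumulated knowledge looks like (the function under test and the edge cases here are invented, not taken from Tor): each regression test encodes a lesson someone once learned the hard way, and a rewrite inherits all of them for free.

```python
# Invented regression tests around URL host parsing; a mature test-suite
# accumulates cases like these one production incident at a time.
import unittest
from urllib.parse import urlsplit

class HostParsingRegressions(unittest.TestCase):
    def test_ipv6_literal_keeps_brackets_out_of_hostname(self):
        # A host like [::1] must not be treated as a DNS name.
        self.assertEqual(urlsplit("http://[::1]:8080/x").hostname, "::1")

    def test_missing_scheme_does_not_invent_a_host(self):
        # "example.com/path" is a relative reference, not a host.
        self.assertIsNone(urlsplit("example.com/path").hostname)

    def test_userinfo_is_not_part_of_the_host(self):
        # "alice@" is user info, not a subdomain.
        self.assertEqual(urlsplit("http://alice@example.com/").hostname,
                         "example.com")

if __name__ == "__main__":
    unittest.main()
```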
Yesterday, I failed a technical interview. It happens, and that's OK. But this one left me with a pretty bad taste in my mouth: I didn't feel I was rejected because I hadn't met the company's standards, or because I made a mistake, but mostly because I was tested on something completely irrelevant to a developer's job.
The technical interview wasn't a conventional one; it was one of those new fancy “automated interviews”, where you log into a website and are given instructions and a tight time limit. When the limit expires, your code is checked against an automated test suite and you're given a grade.
From HR's perspective, it sounds awesome: you have a very cheap way to screen dozens of candidates without human interaction. No more false positives: every candidate passing the test must be “technically proficient”, and all you have to do next is assess their social skills.
Unfortunately, these kinds of tests are very bad at measuring the technical proficiency of a developer. To understand why, you need to understand what a developer's job is about.
A developer's job isn't to write code: the role of a developer is to turn someone's needs (1) into a product (2). We do so by writing code (3) as part of a development team (4).
(1): a big part of a developer's job is to ask questions, to clarify the needs. You want me to write a program that converts a camelCased string to kebab-case? OK, what do you mean by “camelCase”? Does the string PascalCase count as “camelCase” in your mind? How about acronyms? And numbers? And emojis? Is your input always going to be well-formatted because it has already been validated, or do I need to handle being given a string which isn't “camelCase” in the first place? What do you want the program to do if an error occurs? Send an event to your telemetry system? Print a stack trace? A user-friendly error message? If so, should the message be internationalized? As you can see, there's no such thing as “a program that does X”: it's all about clarifying the stakeholders' needs (see the sketch after this list). And there's nothing worse than a developer who starts coding as soon as they think they understand the specification, without asking any questions to make sure their understanding is aligned with the needs.
(2): a product is more than a piece of code, and sometimes your users' business depends on your product working as intended. How you make sure your code is working, how you monitor for things going wrong, and how you deal with failures in production are what make the difference between reliable and bad software.
(4): a developer almost never works alone: reading someone else's code, writing code that is easy for your coworkers to understand and maintain, and using version control are fundamental skills for any developer.
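To make point (1) concrete, here is a hypothetical sketch of that converter (the function name and every policy in it are my own assumptions, not part of the one-line spec): each comment marked “decision” answers one of the questions above, and a different set of answers would produce a different, equally “correct” program.

```python
import re

def camel_to_kebab(text: str) -> str:
    """Convert a camelCase (or PascalCase) identifier to kebab-case.

    Hypothetical sketch: every choice below is a stakeholder decision.
    """
    if not text.isascii() or not text.isidentifier():
        # Decision: reject emojis, spaces, leading digits, non-ASCII, etc.
        # (alternatives: pass the string through unchanged, log, return None)
        raise ValueError(f"not a camelCase identifier: {text!r}")
    # Decision: treat an acronym such as "HTTPServer" as one word
    # ("http-server") and give a digit run its own word ("base64" -> "base-64").
    words = re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", text)
    return "-".join(word.lower() for word in words)

print(camel_to_kebab("parseHTTPResponse"))  # parse-http-response
print(camel_to_kebab("PascalCase"))         # pascal-case
```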
As you can see, “writing code (3)” is only a small part of our job. And it's becoming less and less important over the years. Compared to 40 years ago, developers probably need 80 to 90% less code to build the same feature, thanks to the profusion of high-quality open-source libraries. And that share is likely to shrink even further with the advance of AI. Of course, there are sometimes tough coding tasks that need careful design, trade-offs, trial and error, and fights with a debugger. That's when you need developers who are good coders. But even for those tasks, measuring whether you can solve trivial problems during a timed test is a poor proxy for how well your developer is going to face difficult challenges.
My take on microservices is that the benefit is organizational: it is very complicated to have many people working on a single block of code (if only because of deployment timing, multiple branches full of conflicts in git, etc.). So the point of using “services” is to split functionality between smaller teams that only interact with each other through defined (and documented!!!) interfaces. If you have significantly more services than developers, you are doing it wrong! (If you are a team of 5 devs with 70+ services, please go retrain for another job, thank you.)