100% compatible with existing git commands, via bg git
bg undo & bg redo possible at any step!
a CLI UX that is less WTF than git's
defaults that make sense (for example git diff --word-diff, or git log -p)
all files that are not ignored are tracked automatically (no absurd difference in treatment between files under «Changes not staged for commit» and files that are merely «Untracked files»), unless explicitly excluded via a dedicated command (bg skip?), independently of the .gitignore
commands that remove the need to hand-edit .gitignore (bg ignore) and .git/info/exclude (bg exclude)
nice to have:
a git blame that does not choke on merge commits
defaults that encourage good practices (interactive rebase before pushing, git add -p)
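The bg ignore / bg exclude pair in the list above maps onto two real git mechanisms: the versioned .gitignore and the repo-local .git/info/exclude. A minimal sketch of the plain-git equivalents such a wrapper would hide (bg itself is hypothetical and does not exist):

```shell
# What a hypothetical `bg ignore` / `bg exclude` would do under the hood.
git init -q demo && cd demo

# `bg ignore build/` -> the shared, versioned ignore list:
echo build/ >> .gitignore

# `bg exclude notes.txt` -> the local-only list, never committed:
echo notes.txt >> .git/info/exclude

touch notes.txt
git status --porcelain   # notes.txt does not appear; .gitignore shows as new
```

The distinction is exactly the one the wishlist complains about: .gitignore is shared with collaborators, while .git/info/exclude stays private to your clone, and stock git gives you no command for either, only files to edit by hand.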
“There is no magic money tree,” as Theresa May put it during the snap election of 2017
So that is where Macron's «there is no magic money» line comes from.
Before long, the Bank of England (the British equivalent of the Federal Reserve, whose economists are most free to speak their minds since they are not formally part of the government) rolled out an elaborate official report called “Money Creation in the Modern Economy,” replete with videos and animations, making the same point: existing economics textbooks, and particularly the reigning monetarist orthodoxy, are wrong. The heterodox economists are right. Private banks create money. Central banks like the Bank of England create money as well, but monetarists are entirely wrong to insist that their proper function is to control the money supply. In fact, central banks do not in any sense control the money supply; their main function is to set the interest rate—to determine how much private banks can charge for the money they create. Almost all public debate on these subjects is therefore based on false premises. […]

Back in the UK, the immediate media response was simply silence. The Bank of England report has never, to my knowledge, been so much as mentioned on the BBC or any other TV news outlet. Newspaper columnists continued to write as if monetarism was self-evidently correct. Politicians continued to be grilled about where they would find the cash for social programs. It was as if a kind of entente cordiale had been established, in which the technocrats would be allowed to live in one theoretical universe, while politicians and news commentators would continue to exist in an entirely different one.
That is so true!
Robert Skidelsky's Money and Government: The Past and Future of Economics
=> adding it to my to-read list.
To put it bluntly: QTM is obviously wrong. Doubling the amount of gold in a country will have no effect on the price of cheese if you give all the gold to rich people and they just bury it in their yards, or use it to make gold-plated submarines (this is, incidentally, why quantitative easing, the strategy of buying long-term government bonds to put money into circulation, did not work either). What actually matters is spending.
In England, the pattern was set in 1696, just after the creation of the Bank of England, with an argument over wartime inflation between Treasury Secretary William Lowndes, Sir Isaac Newton (then warden of the mint), and the philosopher John Locke. Newton had agreed with the Treasury that silver coins had to be officially devalued to prevent a deflationary collapse; Locke took an extreme monetarist position, arguing that the government should be limited to guaranteeing the value of property (including coins) and that tinkering would confuse investors and defraud creditors. Locke won. The result was deflationary collapse. A sharp tightening of the money supply created an abrupt economic contraction that threw hundreds of thousands out of work and created mass penury, riots, and hunger. The government quickly moved to moderate the policy (first by allowing banks to monetize government war debts in the form of bank notes, and eventually by moving off the silver standard entirely), but in its official rhetoric, Locke’s small-government, pro-creditor, hard-money ideology became the grounds of all further political debate.
According to Skidelsky, the pattern was to repeat itself again and again, in 1797, the 1840s, the 1890s, and, ultimately, the late 1970s and early 1980s, with Thatcher and Reagan’s (in each case brief) adoption of monetarism. Always we see the same sequence of events:
(1) The government adopts hard-money policies as a matter of principle.
(2) Disaster ensues.
(3) The government quietly abandons hard-money policies.
(4) The economy recovers.
(5) Hard-money philosophy nonetheless becomes, or is reinforced as, simple universal common sense.
Ever since Hume, economists have distinguished between the short-run and the long-run effects of economic change, including the effects of policy interventions. The distinction has served to protect the theory of equilibrium, by enabling it to be stated in a form which took some account of reality. In economics, the short-run now typically stands for the period during which a market (or an economy of markets) temporarily deviates from its long-term equilibrium position under the impact of some “shock,” like a pendulum temporarily dislodged from a position of rest. This way of thinking suggests that governments should leave it to markets to discover their natural equilibrium positions. Government interventions to “correct” deviations will only add extra layers of delusion to the original one.
There is a logical flaw to any such theory: there’s no possible way to disprove it. The premise that markets will always right themselves in the end can only be tested if one has a commonly agreed definition of when the “end” is; but for economists, that definition turns out to be “however long it takes to reach a point where I can say the economy has returned to equilibrium.” (In the same way, statements like “the barbarians always win in the end” or “truth always prevails” cannot be proved wrong, since in practice they just mean “whenever barbarians win, or truth prevails, I shall declare the story over.”)
In fact, there’s absolutely no reason a modern state should fund itself primarily by appropriating a proportion of each citizen’s earnings. There are plenty of other ways to go about it. Many—such as land, wealth, commercial, or consumer taxes (any of which can be made more or less progressive)—are considerably more efficient, since creating a bureaucratic apparatus capable of monitoring citizens’ personal affairs to the degree required by an income tax system is itself enormously expensive. But this misses the real point: income tax is supposed to be intrusive and exasperating. It is meant to feel at least a little bit unfair. Like so much of classical liberalism (and contemporary neoliberalism), it is an ingenious political sleight of hand—an expansion of the bureaucratic state that also allows its leaders to pretend to advocate for small government.
An interesting reflection on income tax.
The one major exception to this pattern was the mid-twentieth century, what has come to be remembered as the Keynesian age. It was a period in which those running capitalist democracies, spooked by the Russian Revolution and the prospect of the mass rebellion of their own working classes, allowed unprecedented levels of redistribution—which, in turn, led to the most generalized material prosperity in human history. The story of the Keynesian revolution of the 1930s, and the neoclassical counterrevolution of the 1970s, has been told innumerable times, but Skidelsky gives the reader a fresh sense of the underlying conflict.
The counterrevolutionaries, starting with Keynes’s old rival Friedrich Hayek at the LSE and the various luminaries who joined him in the Mont Pelerin Society, took aim directly at this notion that national economies are anything more than the sum of their parts. Politically, Skidelsky notes, this was due to a hostility to the very idea of statecraft (and, in a broader sense, of any collective good). National economies could indeed be reduced to the aggregate effect of millions of individual decisions, and, therefore, every element of macroeconomics had to be systematically “micro-founded.”
One reason this was such a radical position was that it was taken at exactly the same moment that microeconomics itself was completing a profound transformation—one that had begun with the marginal revolution of the late nineteenth century—from a technique for understanding how those operating on the market make decisions to a general philosophy of human life. It was able to do so, remarkably enough, by proposing a series of assumptions that even economists themselves were happy to admit were not really true: let us posit, they said, purely rational actors motivated exclusively by self-interest, who know exactly what they want and never change their minds, and have complete access to all relevant pricing information. This allowed them to make precise, predictive equations of exactly how individuals should be expected to act.
Surely there’s nothing wrong with creating simplified models. Arguably, this is how any science of human affairs has to proceed. But an empirical science then goes on to test those models against what people actually do, and adjust them accordingly. This is precisely what economists did not do. Instead, they discovered that, if one encased those models in mathematical formulae completely impenetrable to the noninitiate, it would be possible to create a universe in which those premises could never be refuted. (“All actors are engaged in the maximization of utility. What is utility? Whatever it is that an actor appears to be maximizing.”) The mathematical equations allowed economists to plausibly claim theirs was the only branch of social theory that had advanced to anything like a predictive science (even if most of their successful predictions were of the behavior of people who had themselves been trained in economic theory).
Here they were able to take advantage of certain undeniable weaknesses in Keynesian formulations, above all its inability to explain 1970s stagflation, to brush away the remaining Keynesian superstructure and return to the same hard-money, small-government policies that had been dominant in the nineteenth century. The familiar pattern ensued. Monetarism didn’t work; in the UK and then the US, such policies were quickly abandoned. But ideologically, the intervention was so effective that even when “new Keynesians” like Joseph Stiglitz or Paul Krugman returned to dominate the argument about macroeconomics, they still felt obliged to maintain the new microfoundations.
There is a paradox here. On the one hand, the theory says that there is no point in trying to profit from speculation, because shares are always correctly priced and their movements cannot be predicted. But on the other hand, if investors did not try to profit, the market would not be efficient because there would be no self-correcting mechanism….
Secondly, if shares are always correctly priced, bubbles and crises cannot be generated by the market….
This attitude leached into policy: “government officials, starting with [Federal Reserve Chairman] Alan Greenspan, were unwilling to burst the bubble precisely because they were unwilling to even judge that it was a bubble.” The EMH made the identification of bubbles impossible because it ruled them out a priori.
At this point, we have come full circle. After such a catastrophic embarrassment, orthodox economists fell back on their strong suit—academic politics and institutional power. In the UK, one of the first moves of the new Conservative-Liberal Democratic Coalition in 2010 was to reform the higher education system by tripling tuition and instituting an American-style regime of student loans. Common sense might have suggested that if the education system was performing successfully (for all its foibles, the British university system was considered one of the best in the world), while the financial system was operating so badly that it had nearly destroyed the global economy, the sensible thing might be to reform the financial system to be a bit more like the educational system, rather than the other way around. An aggressive effort to do the opposite could only be an ideological move. It was a full-on assault on the very idea that knowledge could be anything other than an economic good.
Even at the height of the eventual recovery, in the fifth-richest country in the world, something like one British citizen in twelve experienced hunger, up to and including going entire days without food. If an “economy” is to be defined as the means by which a human population provides itself with its material needs, the British economy is increasingly dysfunctional.
Economic theory as it exists increasingly resembles a shed full of broken tools. This is not to say there are no useful insights here, but fundamentally the existing discipline is designed to solve another century’s problems. The problem of how to determine the optimal distribution of work and resources to create high levels of economic growth is simply not the same problem we are now facing: i.e., how to deal with increasing technological productivity, decreasing real demand for labor, and the effective management of care work, without also destroying the Earth. This demands a different science. The “microfoundations” of current economics are precisely what is standing in the way of this. Any new, viable science will either have to draw on the accumulated knowledge of feminism, behavioral economics, psychology, and even anthropology to come up with theories based on how people actually behave, or once again embrace the notion of emergent levels of complexity—or, most likely, both.
When will we start handing out prison sentences for this kind of negligence?
I am serious: security flaws like this one essentially have two origins:
bad system design (no encryption, no authentication, or a botched implementation of either): that is negligence, plain and simple.
memory corruption bugs: these are very hard to avoid when writing C (even the best get caught), but the upside is that this class of bug can only occur because C was chosen for the pacemaker's embedded system in the first place. And that is where the negligence lies! Why use a language that is such a notorious source of vulnerabilities when embedded Java has been available since the 1990s (including on very low-powered hardware: smart cards are the classic example)? Java is not the sexiest language in the world (but neither is C, for that matter), yet at least it is memory safe, so it rules out this whole class of flaws…
The other question is: how are you supposed to find out that your pacemaker needs an update? My 88-year-old grandmother certainly does not read the Pixels section of lemonde.fr. She does not know how to use the internet, and I doubt she even knows what the word «update» means. Nor is it likely she knows her pacemaker's model, even though it is surely written somewhere in the paperwork she was handed after the implantation. So what do you do to make sure the updates actually get applied? Run a big TV campaign telling everyone that certain pacemaker models are vulnerable and that they must urgently see their cardiologist to get the update? Given the needless stress that would inflict on patients, and the saturation of cardiology practices, it is not clear it would be worth it. Leave the vulnerable pacemakers out in the wild, unpatched? Easier, but is that responsible? Not to mention how slow patch certification is (the FDA took more than a year to approve the fix for the United States, and in Europe it still had not been approved as of November).
Since shipping these updates is so hard, manufacturers must be beyond reproach at design time, and for that to happen there must be sanctions when they cut corners. If a surgeon decided to operate on a patient with a rusty old knife from 1972, he would probably go to prison. Why not a pacemaker manufacturer who uses a creaky old language from 1972?
This article is truly ridiculous; I am noting it here so I can come back to it in 2 or 3 years, we will have a good laugh.
My favorite passage is still this one:
Finally, and above all, the political context has turned around. Brexit, instead of triggering a domino effect, has helped stem, at least temporarily, the populist wave. The reconquest began with the defeat of the leader Geert Wilders in the Netherlands and continued with Emmanuel Macron's victory in the French presidential election. The obsessive euroscepticism that gripped the Old Continent and the Anglo-Saxon observers, always quick to play Cassandra about the euro, is no longer the order of the day.
57% approval, including 53% for France, is better than last year, but it remains a long way from Europe's popularity before the referendum on the constitution (67% of French respondents convinced by Europe in 2004, source).
Amusing detail: the United Kingdom is at 49% pro-European, and yet it has just chosen to jump ship. That is how democracy works: as soon as approval drops below 50%, there is a non-zero risk of being thrown out at the ballot box. Hardly grounds to proclaim the death of euroscepticism, when 10 countries out of 27 (excluding the UK) are below the danger threshold.
but [France] has an opportunity to close the competitiveness gap with Germany, which had been widening since 2004. That is the condition set out by Philipp Hildebrand [now vice-chairman of the investment fund BlackRock, the world's largest asset manager] for the «golden decade» he predicts to come about.
Well, well: to get the golden decade he hopes for, our friend the great capitalist is waiting for us to «close the competitiveness gap», which is newspeak for «dismantle social protections and labor law». In other words, Mr. Hildebrand is hoping for a decade of capital profiting on the backs of citizens; unfortunately, as bets go, that one is not unrealistic…