

1. Introduction

In the social production of their existence, men inevitably enter into definite relations, which are independent of their will, namely relations of production appropriate to a given stage in the development of their material forces of production. The totality of these relations of production constitutes the economic structure of society, the real foundation, on which arises a legal and political superstructure and to which correspond definite forms of social consciousness. The mode of production of material life conditions the general process of social, political and intellectual life. It is not the consciousness of men that determines their existence, but their social existence that determines their consciousness. At a certain stage of development, the material productive forces of society come into conflict with the existing relations of production or – this merely expresses the same thing in legal terms – with the property relations within the framework of which they have operated hitherto. From forms of development of the productive forces these relations turn into their fetters. Then begins an era of social revolution. The changes in the economic foundation lead sooner or later to the transformation of the whole immense superstructure. (Marx et al. [1978], Preface)

What distinguishes a utopian approach to social transformation from a materialist one is that the latter must start with the real contradictions that exist between technological imperatives and the social forms that currently exist. These specify not a future that might be desired, but what may be required.

One has therefore to start with technology complexes and demographics since all social formations combine a particular set of technologies with a particular density of human population. Only some technology complexes are compatible with a given population density. Our current population could not survive on the basis of pastoralism, so much is obvious. But nor can the present population long survive on the basis of an extractive fossil fuel economy.

Contemporary capitalism is heavily dependent on fossil fuels. Almost 90% of world primary energy comes from these sources and the percentage coming from nuclear and renewable sources has if anything tended to fall slightly in recent years. Industry and commerce use about 60% of all primary energy, transport and residential use around 20% each.

The transition to what Marx termed a new metabolism with nature means that the system of production must be subject to conscious planning rather than just a self organised response to market demand. Advocacy of conscious planning in kind has been a preoccupation of mine for almost forty years now and is most prominently expressed in Towards a New Socialism[1]. In this article I will look at some of the computer engineering and computer algorithm problems that I researched to provide the background to that book, and how I have now returned to them to address the issue of environmental planning.

2. Engineering phase

My first response to the issue of socialist calculation was to think of it in terms of the technical engineering problems involved. From late 1980 to the summer of 1982 I was working on my PhD. The research topic was how to incorporate the concept of data persistence into high-level compiled languages in such a way as to allow it to work in a distributed networked environment.

Because of this background, and also because I was a Marxist economist, a fellow PhD student from China, a CPC member and still in those days a Maoist, asked if I would be willing to come to work in the planning ministry in Beijing and introduce some of the techniques we had developed in Edinburgh to that ministry. As it happened, despite my agreeing, my Chinese comrade discovered that although foreign CS experts were being invited to China, the planning ministry was out of bounds for foreigners. But the invitation had set me thinking about how the technology of persistence could be applied to planning an economy for a country as huge as China. The first point that struck me was that we would need a much larger scale of distributed computing than anything we had experimented with. If one looked forward to a future automated Chinese economy, the planning system would need to coordinate data originating in hundreds of millions of computers. It would have to integrate this into a single vast shared database that could coordinate the whole of social production.

So from about 1983 to 1988 I set about designing a set of computers that would, I hoped, be suitable for the task. In this sense my first reaction was very like that of Glushkov[2, 3], the Soviet computer engineer who proposed a nationwide computer network to accelerate the creation of a moneyless communist economy[1]. I took the task to imply that we would need a computer architecture with a very much larger shared virtual address space than those available on 1980s computers, which maxed out at a 32-bit virtual address space. Furthermore (remember this was before the collapse of the USSR, when socialism still seemed to be winning on a world scale), it struck me as obvious that a design for socialist planning should in principle be extendible to worldwide planning, for a day in the future when the number of computers rivalled the number of people.

The population of the world in those days, around 4 billion, would already have come close to exhausting 32 bits if there were one computer per person.

The first machine architecture we designed, PSM[5], had a 128-bit address space made up of a 48-bit host number, a 48-bit local object number and an offset of up to 32 bits within the object. That is to say, individual objects could be up to 4 gigabytes in size.

This would have allowed each computer to hold 2⁴⁸ objects, individually ranging in size from Lisp cells to 4GB text files. It then allowed for 2⁴⁸ such machines to be networked around the world – the assumption being that some form of satellite communications system would be used for the required internet. Objects would migrate via a worldwide network from the source machine to any machine that had a copy of the Host/LON combination. This is very similar to the somewhat later concept of a URL used in the WWW, but with the difference that the identifiers were seen as being binary rather than textual. The intention was to create a unified worldwide planning database fed by data entry at every point of production or point of consumption.

As a point of comparison, current state of the art machines only have a 64-bit address space and do not have the direct capacity to perform cross network addressing at the machine architecture level.

Fig. 2.1: The 96 bit address space Poppy computer of 1985 developed at Glasgow University. This was seen as a prototype workstation for networked economic planning

We were in close collaboration with ICL, with a view to the PSM being a successor architecture to the 3900. The single 128-bit accumulator in the PSM design, along with other aspects of the machine, was already present on the 3900. The segment registers we proposed were longer than those on the 3900. The intention was to prototype the PSM by microcode changes to an existing Series 3900.

ICL supplied us with an early 3900 machine at Glasgow University, to which the team had moved. But obstacles were placed in the way of accessing the microcode, so the research platform at Glasgow was switched to a new machine, Poppy, built in collaboration with Acorn[6]. This had a 96-bit address space, the reduction in space being due to allowing only 2¹⁶ machines on a network.

The intention was to prove the viability of the basic machine architecture before going on to build a full 128-bit system. This machine was actually built and successfully tested, but in the end Acorn chose to go with their own 32-bit ARM processor – which has subsequently been very successful.

By the late 80s, after other attempts to build wide address machines[7, 8], it was becoming clear that my previous confidence in the stability and growth of world socialism was unsound. In China, planning was being downgraded relative to the market, and the same seemed about to occur in the USSR. It was not enough to work on engineering solutions to the problem of socialist planning; what was needed was a reply to market theorists at the level of political economy.

The basic economics of our book can be seen as a long elaboration on Marx’s Critique of the Gotha Programme. But we needed to situate our update of Marx against the background of, on the one hand, the Austrian capital theorists von Mises[9, 10, 11] and Hayek[12, 13], and on the other, the Soviet optimal planning theorists like Kantorovich[14, 15].

3. Plans and computability

Starting with von Mises, conservative economists argued that effective socialist planning was impossible because:

  • There was no effective cost metric in the absence of a market.
  • The complexity of calculation was too great: the millions of equations argument. This is also repeated by Nove.

Let us look at these in turn.

Von Mises argued that without a market one could not cost things and thus had no rational basis for deciding between production alternatives. The one exception that he allowed was the use of Labour Values. These, he said, could in principle act as a cost metric, but he thought them impractical to compute. I will return to this.

The millions of equations argument already seemed obsolete in the 1960s[16]. Computers obviously changed things: provided their address space is large enough, they can readily solve millions of equations. But we need to be quite precise about how many million equations are to be handled and just how hard they are to solve. This topic is a branch of algorithmic complexity theory.

The complexity of an algorithm is measured by how the number of instructions used to compute it grows with the size of the problem. It is a fundamental result of computing theory that the inherent difficulty of a problem does not alter as you use more advanced machines. A faster computer will shrink all problems by some constant factor, but the complexity order of a given problem is the same for all models of computer.

We express complexity in what is known as big O notation.

If we say a problem is O(n²) we mean that its runtime grows as the square of the number of data elements n that it handles. If it is O(eⁿ), it is exponential in the data size, and so on. If a problem’s runtime grew as n² on a 1980 model computer, it will still grow as n² on a 2019 computer. So, let us look at two classes of problems from this standpoint.
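As a toy illustration of what the notation means (a made-up example, not from the experiments reported later in this article): the operation count of a doubly nested loop over n items grows as n², so doubling n quadruples the work on any machine whatever.

```python
# A toy illustration of big-O growth: counting the inner-loop steps of
# a doubly nested loop over n items. The count is exactly n*n, so it is
# O(n^2); a faster machine runs it sooner but the growth law is the same.

def pairwise_ops(n):
    """Number of inner-loop steps when visiting every pair: exactly n*n."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

for n in (100, 200, 400):
    print(n, pairwise_ops(n))   # 10000, 40000, 160000 - quadrupling each time
```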

  1. Computing labour values of all products in an economy.
  2. Constructing a consistent and optimal (or near-optimal) 5-year plan for an economy.
3.1. Input-output data

The economist Leontief, working in the USA but drawing on Soviet experience, developed a systematic way of representing the flow structure of an economy[17]. He prepared tables showing how much of each product flowed from one industrial sector to another. The resulting matrix provides an abstract model of economic flows and can be used in economic planning[18]. When used in the West the tables are usually measured in money quantities, but in a socialist economy, using material balances, the cells of the table can be in physical units. Some capitalist countries have also published physical tables[19], albeit less detailed than the information that was collated by GOSPLAN in the USSR.

Fig. 3.1: The structure of an input output table. Each column represents the production process of an industrial sector. Each cell A(i,j) represents the flow of good i into the production of industry j.

Given a sufficiently detailed table, one is then in a position to compute the labour value of any product listed in it, since one row of the table is typically the labour input. Again there is some variation in how labour inputs are included. Western tables usually just give money spent on wages for the labour input row, but Sweden, for example, publishes tables in hours.

There is an extensive literature[20, 21, 22, 23, 24, 25, 26] now computing the direct and indirect labour content of different industries’ outputs using IO tables. The motivation has often been to test the labour theory of value, but the basic math being used is the same as would have to be employed were an economy being planned in terms of labour time.
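The calculation behind these studies can be sketched in a few lines. With a flow matrix A (inputs per unit of output) and a direct labour vector l, total labour values v satisfy v = l + vA, which rearranges to v(I - A) = l. A minimal sketch with an invented 3-product economy rather than a real input-output table:

```python
import numpy as np

# A minimal sketch of labour value determination from a flow
# input-output table. The coefficients below are invented for a toy
# 3-product economy. Labour values satisfy v = l + vA: value equals
# direct labour plus the labour embodied in the produced inputs.

A = np.array([[0.1, 0.2, 0.0],   # A[i,j]: units of good i per unit of good j
              [0.3, 0.1, 0.2],
              [0.0, 0.1, 0.1]])
l = np.array([1.0, 0.5, 2.0])    # direct labour hours per unit of output

# v(I - A) = l is, as a column system, (I - A)^T v = l.
v = np.linalg.solve((np.eye(3) - A).T, l)
print(np.round(v, 3))            # total (direct + indirect) labour per unit
```

The same system is what a Sraffa or Morishima type value equation reduces to once the table is given.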

3.2. Complexity

The data in input output tables can be used as input for Sraffa[27] type or Morishima[28] type labour value determination equations. So the question we must ask is: how rapidly does the computational complexity of solving this sort of linear equation grow with the number of products in an economy?

The two most basic techniques for solving this type of equation set, matrix inversion and Gaussian elimination, have complexity orders of n³ for compute time and n² for computer memory requirements. Are these tractable?

Well, in general, computer scientists would say that an n³ problem is tractable, but for sufficiently large n it can still be expensive. How big would n be?

Nove claimed that the number of products, n in our terms, for the USSR was of the order of 10⁷, so that would imply that the number of words of computer memory required would be 10¹⁴ and the number of mathematical operations would be of the order of 10²¹. Top end supercomputers can achieve 10¹⁷ operations per second[2], implying the calculation would take of the order of 3 hours on such a machine.

But this is overkill. There are two big improvements that one can make in the algorithmic techniques that bring the problem into the range of what can be achieved on much lower performance machines.

Fig. 3.2: Experiment in timing solving for labour values. X axis is number of products in the economy. Y axis gives time and the number of inputs per output. Timings were done on a 4 core Intel(R) Celeron(R) CPU N3450 @ 1.10GHz.
  1. One can use iterative Jacobi solvers, which have a limiting complexity of order n².
  2. One can exploit the fact that highly disaggregated input output matrices are sparse. We hypothesised in Towards a New Socialism that the number of non-zero elements in a large input output matrix will grow as order n log n. Study of disaggregated tables seems to bear this out[29].

If you combine a Jacobi solver with sparse matrices you can reduce the number of operations and the memory requirements to the extent that a much more modest computer could calculate labour values in a few minutes for a whole economy. As an example see Figure 3.2 which shows that a modern entry level PC can solve the equations for labour values for a 9 million product system in about 3 minutes. Computing labour values is thus in essence a computationally trivial task.
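A minimal sketch of the combined technique (using invented random data, not the experiment of Figure 3.2): store A sparsely and iterate v ← l + vA until it stops changing. Each pass adds one more round of indirect labour, and costs work proportional to the number of non-zero coefficients rather than n².

```python
import numpy as np
from scipy import sparse

# Sketch of the sparse iterative approach with invented random data.
# Instead of inverting (I - A), repeat v <- l + vA. For a productive
# economy (every column of A summing to less than 1) this converges,
# and with a sparse A each pass costs O(non-zeros), not O(n^2).

n = 10_000                             # products in the toy economy
rng = np.random.default_rng(0)
A = sparse.random(n, n, density=10 / n,          # ~10 inputs per product
                  random_state=rng, format="csr")
A = A * (0.9 / A.sum(axis=0).max())    # scale so the max column sum is 0.9
AT = A.T.tocsr()                       # precompute: (vA)_j = (A^T v)_j
l = np.ones(n)                         # one hour of direct labour per unit

v = l.copy()
for _ in range(300):
    v_next = l + AT @ v                # one sparse Jacobi-style pass
    if np.abs(v_next - v).max() < 1e-10:
        break
    v = v_next
print(round(float(v.max()), 4))        # largest total labour value found
```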

4. From I/O tables to 5-year plans

The input-output method is defined entirely in terms of flows. If you look at an input-output table, everything is a flow per year of product Y into the production process of industry X.

Now if you think back to the way the labour theory of value was defined and defended by Ricardo, it applies to freely reproducible commodities. The labour values defined by Morishima are likewise predicated on free reproducibility: he assumes that you can adjust the intensities of operation of individual industries so long as the result does not cause any product to have a net negative production and the total labour use does not exceed the workforce available.

Most importantly, both the input-output formalism and labour values ignore constraints that are imposed by existing stocks of capital goods (long-lived means of production). Being a flow formalism, it represents long-lived means of production like ships or trucks only as the flow of replacement trucks or ships needed to make good material depreciation and sustain the existing productive capacity.

Labour values, calculated on this basis, express the long run equilibrium social cost of making something, because in the long run, once capital stocks have been adjusted, everything is reproducible by labour – direct or indirect.

But when changing the structure of the economy, in addition to labour you need to adjust the actual capital stocks available to you, and the existing stocks this year constrain what you are able to produce next year. Long term planning has to be able to take this sort of stock constraint into account.

One, albeit short term, approach is to use linear programming as Kantorovich did and to derive from this a set of what he termed Objective Valuations, which over the short term will diverge from labour values. Kantorovich held that his Objective Valuations would in the long term converge on labour values, but that in the short run they were useful to ensure that the best use was made of currently available plant and equipment.

I arrived at the Harmony approach[30] whilst thinking over how to come up with a better algorithm for solving Kantorovich’s problem. As it happens, I was in Budapest in 1989 looking to find a Hungarian publisher/translator for the work that later became Towards a New Socialism, and came across a Hungarian planning text[31] in English. Thinking it over, I came up with an improved algorithm based on ideas from neural net theory. It should, therefore, be thought of as filling the same role as linear programming in the Soviet optimal planning school, but with better algorithmic performance.
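The published algorithm is in reference [30]. Purely as an illustration of the general flavour, and emphatically not a reconstruction of that algorithm, one can sketch a gradient-style iteration in which industry intensities are nudged uphill on a concave "harmony" score that punishes shortfalls against target far more heavily than it rewards surpluses. All numbers and the particular harmony function below are invented:

```python
import numpy as np

# Loose gradient-ascent illustration of the harmony idea (NOT the
# published algorithm of [30]; all coefficients invented). Each
# product's relative plan fulfilment r gets a concave score
# h(r) = 1 - exp(-r): deep shortfalls score very badly, surpluses
# saturate. Gross outputs x are nudged uphill within a labour budget.

A = np.array([[0.1, 0.2, 0.0],         # flow coefficients: A[i,j] is use
              [0.3, 0.1, 0.2],         # of good i per unit of good j
              [0.0, 0.1, 0.1]])
l = np.array([1.0, 0.5, 2.0])          # labour per unit gross output
t = np.array([10.0, 5.0, 8.0])         # target net outputs
L = 60.0                               # labour available

def harmony(r):
    """Concave score of relative fulfilment r = (net - target)/target."""
    return 1.0 - np.exp(-r)

I = np.eye(3)
x = np.full(3, 10.0)                   # initial gross outputs
for _ in range(2000):
    r = ((I - A) @ x - t) / t              # relative over/underfulfilment
    grad = (I - A).T @ (np.exp(-r) / t)    # gradient of total harmony in x
    x = np.maximum(x + 0.05 * grad, 0.0)   # small uphill step, keep x >= 0
    x *= min(1.0, L / (l @ x))             # scale back inside labour budget
print(np.round((I - A) @ x, 2), round(l @ x, 2))  # net outputs, labour used
```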

To use the Harmony algorithm or Kantorovich’s method you need additional information not provided in the input-output formalism. Consider first the problem of constructing a single year plan. You need:

  1. A flow matrix or flow I/O table.
  2. A corresponding capital stock matrix, specifying the amount of machine Y needed to produce an annual flow P of output x.
  3. A depreciation matrix specifying how fast each type of capital good depreciates in each of its uses.
  4. A target vector of net outputs for the current period.

Given this information you can then apply either Kantorovich’s method or the Harmony method to construct a plan.
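As a toy illustration of the Kantorovich formulation, the ingredients above map directly onto a linear programme: minimise labour subject to the net output targets and the capital stock limits. Every coefficient below is invented, depreciation is folded into the flow matrix for brevity, and SciPy's general-purpose LP solver stands in for the packages discussed here:

```python
import numpy as np
from scipy.optimize import linprog

# A toy single-year plan in linear-programming form (all coefficients
# invented): choose gross outputs x meeting the net output targets
# within the existing capital stocks, using the least labour.

A = np.array([[0.1, 0.2, 0.0],     # flow matrix: inputs per unit output
              [0.3, 0.1, 0.2],
              [0.0, 0.1, 0.1]])
K = np.array([[0.0, 0.5, 0.1],     # capital stock of good i tied up
              [0.2, 0.0, 0.0],     # per unit annual output of good j
              [0.1, 0.3, 0.0]])
stocks = np.array([8.0, 4.0, 6.0]) # capital stocks on hand this year
t = np.array([10.0, 5.0, 8.0])     # target net outputs
l = np.array([1.0, 0.5, 2.0])      # direct labour per unit output

res = linprog(
    c=l,                                    # minimise total labour l.x
    A_ub=np.vstack([-(np.eye(3) - A), K]),  # (I - A)x >= t  and  Kx <= stocks
    b_ub=np.concatenate([-t, stocks]),
    method="highs",
)
# res.ineqlin.marginals holds the dual values on the constraints, which
# play the role of Kantorovich's Objective Valuations.
print(res.status, np.round(res.x, 2), round(res.fun, 2))
```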

Kantorovich style linear programming packages are available for Linux and Windows. A free example is lp_solve. Open source software has been released[3] to use either the Kantorovich or the Harmony approach to compute plans from such specifications.
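For orientation, a miniature plan of this kind written in lp_solve's LP file format might look like the following (a two-product toy with invented numbers, not real data):

```
/* Two-product toy plan in lp_solve's LP format (invented numbers):
   minimise labour subject to net output targets. Variables are gross
   outputs and are non-negative by default. */
min: 1.0 x1 + 0.5 x2;

0.9 x1 - 0.2 x2 >= 10;   /* net output of good 1 */
-0.3 x1 + 0.9 x2 >= 5;   /* net output of good 2 */
```

A file in this format can be handed directly to the lp_solve command-line tool.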

The package allows you to experiment with macroeconomic planning. Using it you can compute multiyear plans for either toy economies or, in principle, sectoral plans for whole economies using input output tables. It is intended to be run in a Linux environment, from the command line. Figure 4.1 shows runs of plan construction using the Kantorovich technique. It is clear from the slopes of the plots on the log scale graph that the technique is of order n³ in the number of industries planned.



Fig. 4.1: A comparison of the performance of the lp_solve and Harmony based plan solvers. Note that the Y axis is a log scale. Tests run on an AMD A12 with a 1 GHz core clock using Ubuntu under VirtualBox with 2 cores available.

This essentially limits the Kantorovich method to macroeconomic or sectoral planning. If one is to construct fully disaggregated plans as suggested in Towards a New Socialism, you need algorithms of lower complexity. Figure 4.1 compares the Kantorovich approach with the Harmony algorithm. The important point to note is that the Harmony time plots have significantly lower complexity allowing much larger problems to be handled, even on a small computer. On a laptop it was found impractical to construct 5-year plans broken down to more than 100 industries when using the Kantorovich approach. The execution times simply grew too fast.

With the harmony algorithm, the actual plan computation for a 5-year 500 industry plan was a few seconds. The limiting factor was the generation and handling of large textual spreadsheets with the appropriate sparseness to test the algorithm.

The software as it stands is proof of concept. It could be used to perform macroeconomic planning breaking down the economy as far as Western I/O tables allow. But the data structures it is fed with, textual spreadsheets, whilst suitable for published I/O tables, would be prohibitively wasteful if applied to fully disaggregated economic plans involving millions of products.

For real communist planning one would first have to build the sort of object oriented database infrastructure that I was concerned with in the 1980s. At the same time, the software would have to be re-engineered to work on highly parallel machines. The experimental data given here shows that the algorithms are tractable; implementing them on parallel supercomputers would be an appreciable development project.

5. Green Transition

So far my experiments have been with large synthetic I/O tables with the sparseness factors given by Reifferscheidt’s n log n empirical growth law. I am now trying to establish a collaborative project to use the problem of a Green Transition as a demonstration of the use of planning in kind. As a starting point I have been collecting and processing input output data and capital stock data for the UK. I have now reached the stage where I can build realistic linear optimisation models of the UK economy at the level of detail given by the UK input output tables.

As a test run I am evaluating a 5 year plan to shift the composition of industry and output to produce a reduction in the trade deficit.

The resulting linear programme in .lp format occupies a file of about 28MB and has about half a million occurrences of variables.

Even this aggregated plan seems too large for lp_solve to evaluate in a reasonable time. I have tried running it for a few hours without getting an answer.

I find this rather disappointing, as it means that I will face the rather more challenging task of rewriting the Harmony planning algorithm to handle the additional constraints implied by imports and exports, which have so far been left out of consideration in the planning proposals in TNS.

Until this basic framework is working fast, we cannot progress to the creation of the more disaggregated input output tables that we will need for environmental planning. But I am reasonably confident that with a couple of months more work I will have solved the algorithmic problems.

I would greatly welcome the chance to build this as a collaborative venture with other socialists who have appropriate scientific or economic skills.


[1] AF Cottrell and WP Cockshott. Towards a New Socialism. Spokesman Books, 1993.

[2] Slava Gerovitch. Internyet: why the Soviet Union did not build a nationwide computer network. History and Technology, 24(4):335-350, 2008.

[3] Boris Malinovsky, Anne Fitzpatrick, and Emmanuel Aronie. Pioneers of Soviet computing. Edited by Anne Fitzpatrick. Translated by Emmanuel Aronie. Np: published electronically, 2010.

[4] Francis Spufford. Red Plenty. Faber & Faber, 2010.

[5] WP Cockshott. The persistent store machine. Persistent Programming Research Report, (18), 1985.

[6] WP Cockshott. Building a microcomputer with associative virtual memory. Persistent Programming Research Report, (20):85, 1985. URL

[7] WP Cockshott and PW Foulk. Implementing 128 bit persistent addresses on 80x86 processors. In Security and Persistence, pages 123-136. Springer London, 1990.

[8] W Paul Cockshott. Design of POMP - a persistent object management processor. In Persistent Object Systems, pages 367-376. Springer, 1990.

[9] L. von Mises. Socialism: An Economic and Sociological Analysis. Jonathan Cape, 1951.

[10] L. von Mises. Economic calculation in the socialist commonwealth. In F A Hayek, editor, Collectivist Economic Planning. Routledge and Kegan Paul, London, 1935.

[11] L. von Mises. Human Action. Hodge and Company, London, 1949.

[12] F. A. Hayek. The use of knowledge in society. American Economic Review, pages 519-530, 1945.

[13] F. A. Hayek. The Counter-Revolution of Science. The Free Press, New York, 1955.

[14] L.V. Kantorovich. Mathematical Methods of Organizing and Planning Production. Management Science, 6(4):366-422, 1960.

[15] L.V. Kantorovich. The Best Use of Economic Resources. Harvard University Press, 1965.

[16] Oscar Lange. The computer and the market. In Socialism, capitalism and economic growth: essays presented to Maurice Dobb. Cambridge University Press, 1967.

[17] Wassily Leontief. Input-output economics. Scientific American, 185(4), 1951.

[18] Henri Aujac. Leontief’s input – output table and the french development plan. Wassily Leontief and Input-Output Economics, pages 294 – 310, 2004.

[19] Walter Radermacher and Carsten Stahmer. Material and energy flow analysis in Germany. Accounting framework, information system, applications. In Environmental accounting in theory and practice, pages 187-211. Springer, 1998.

[20] P. Petrovic. The deviation of production prices from labour values: some methodology and empirical evidence. Cambridge Journal of Economics, 11:197-210, 1987.

[21] A. M. Shaikh. The empirical strength of the labour theory of value. In R. Bellofiore, editor, Marxian Economics: A Reappraisal, volume 2, pages 225-251. Macmillan, 1998.

[22] W Paul Cockshott, A Cottrell, and GJ Michaelson. Testing Labour Value Theory with input/output tables. Department of Computer Science, University of Strathclyde, 1993.

[23] David Zachariah. Testing the labor theory of value in Sweden, 2004.

[24] David Zachariah. Labour Value and Equalisation of Profit Rates. Indian Development Review, 4(1): 1-21, 2006.

[25] Nils Fröhlich. Labour values, prices of production and the missing equalisation tendency of profit rates: evidence from the German economy. Cambridge journal of economics, 37(5):1107-1126, 2013.

[26] Lefteris Tsoulfidis and Dimitris Paitaridis. Monetary expressions of labor time and market prices: Theory and evidence from China, Japan and Korea. Review of Political Economy, 2016.

[27] Piero Sraffa. Production of commodities by means of commodities. Cambridge University Press, Cambridge, 1960.

[28] Michio Morishima. Marx’s economics: A dual theory of value and growth. CUP Archive, 1978.

[29] Michael Reifferscheidt and Paul Cockshott. Average and marginal labour values are O(n log n) - a reply to Hagendorf. World Review of Political Economy, 5(2):258-275, 2014.

[30] W. P. Cockshott. Application of artificial intelligence techniques to economic planning. Future Computing Systems, 2:429-443, 1990.

[31] János Kornai. Mathematical planning of structural decisions, volume 45. North-Holland, 1975.


[1] A good account of this period is given in the novel Red Plenty[4].

[2] Eg: Summit – IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband , IBM DOE/SC/Oak Ridge National Laboratory.