Running the Net After a Collapse

I’ve been thinking about what we need in Australia to preserve the free software community in the face of an economic collapse (let’s not pretend that the US can collapse without taking Australia down too). For current practices of using the Internet and developing free software to continue, it seems that we need the following essential things:

  1. Decentralised local net access.
  2. Reliable electricity.
  3. Viable DNS in the face of transient outages.
  4. Reasonably efficient email distribution (any message should be delivered within 48 hours).
  5. Distribution of bulk binary data (CD and DVD images of distributions). This includes providing access at short notice.
  6. Mailing lists for providing assistance, with rapid turn-around (measured in minutes).
  7. Good access to development resources (test machines).
  8. Reliable and fast access to Wikipedia data.
  9. A secure infrastructure.

To solve these I think we need the following:

  1. For decentralised local net access, wireless is the only option at this time. Currently in Australia only big companies such as Telstra can legally install wires that cross property boundaries. While it’s not uncommon for someone to put an Ethernet cable over their fence to play network games with their neighbors or to share net access, this can’t work across roads and is quite difficult when there are intervening houses of people who aren’t interested. When the collapse happens the laws restricting cabling will all be ignored, but even then a wireless backbone will be needed to link the small local areas that Ethernet can cover. There is a free wireless project, wireless.org.au, which promotes and supports such developments.
  2. Reliable electricity is only needed for server systems and backbone routers. The current model of having huge numbers of home PCs running 24*7 is not going to last. There are currently a variety of government incentives for getting solar power at home. The cheapest way of doing this is a grid-connected system which feeds excess capacity into the grid; the downside is that when grid power goes off it shuts down entirely to avoid the risk of injuring an employee of the electricity company. But once solar panels are installed it should not be difficult to convert them to off-grid operation for server rooms (which will be based in private homes). It will be good if home-based wind power generation becomes viable before the crunch comes; server locations powered by both wind and solar will need smaller batteries to operate overnight (see the battery-sizing sketch after this list).
    In the third world it’s not uncommon to transport electricity by carrying around car batteries. I wonder whether something similar could be done in a post-collapse first-world country with UPS batteries.
    Buying UPSs for all machines is a good thing to do now; when the crunch comes such things will be quite expensive. Buying energy-efficient machines is also a good idea.
  3. DNS is the most important service on the net. The current practice is for client machines to use a DNS cache at their ISP. For operation with severe and ongoing unreliability in the net we would need a hierarchy of caching DNS servers to increase the probability of getting a good response to a DNS request even if the client can’t reach the server that would normally answer it. One requirement would be the use of DNS cache programs which store their data on disk (so that a transient power outage on a machine which has no UPS won’t wipe out the cache); one such DNS cache is pdnsd [1] (I recall that there were others but I can’t find them at the moment). Even if pdnsd is not the ideal product for such tasks it’s a good proof of concept, and a replacement could be written quickly enough (see the pdnsd sketch after this list).
  4. For reasonably efficient email distribution we would need, at a minimum, a distribution of secondary MX records (see the MX record sketch after this list). If reliable end-to-end connections aren’t possible then outbound smart-hosts serving geographic regions could connect to secondary MX servers in the recipient’s region (similar to the way most mail servers in Melbourne used to use a university server as a secondary MX until the sys-admin of that server got fed up with it and started bouncing such mail). Of course this would break some anti-spam measures and force other anti-spam measures to be more local in scope. But as most spam is financed from the US it seems likely that a reduction in spam will be one positive result of an economic crash in the US.
    It seems likely that UUCP [2] will once again be used for a significant volume of email traffic. It is a good reliable way of tunneling email over multiple hops between hosts which have no direct connectivity.
    Another thing that needs to be done to alleviate the email problem is to have local lists which rebroadcast traffic from major international lists. This used to be a common practice in the early 90s, but as bandwidth increased and the level of clue on the net decreased, list managers wanted more direct control so that they could remove bad addresses more easily.
  5. Distributing bulk data requires local mirrors. Mirroring the CD and DVD images of major distributions is easy enough (see the rsync sketch after this list). Mirroring other large data (such as the talks from www.ted.com) will be more difficult and require more manual intervention. Fortunately 750G disks are quite cheap nowadays and we can only expect disks to become larger and cheaper in the near future. Using large USB storage devices to swap data at LUG meetings is a viable option (we used to swap data on floppy disks many years ago). Transferring CD images over wireless links is possible, but not desirable.
  6. Local LUG mailing lists are a very important support channel, but the quality and quantity of local lists is not as high as it might be due to competition with international lists. If a post to an international list could be expected to take five days to get a response then there would be a much greater desire to use local lists. Setting up local list servers is not overly difficult.
  7. Access to test machines is best provided by shared servers. Currently most people who do any serious free software development have collections of test machines, but restrictions in the electricity supply will make this difficult. Fortunately virtualisation technologies are advancing well; much of my testing could be done with a few DomUs on someone else’s server (I already do most of my testing with local DomUs, which has allowed me to significantly decrease the amount of electricity I use). See the Xen config sketch after this list.
    Another challenge with test machines is getting the data. At the moment if I want to test my software on a different distribution it’s quite easy for me to use my cable link to download a DVD image, but that isn’t going to work well on a wireless link. Using ssh over a wireless link to connect to a server that someone else runs would be a much more viable option than trying to download a distribution DVD image over wireless!
  8. Wikipedia.org is such an important resource that access to it makes a significant difference to the ability to perform many tasks. Fortunately they offer CD images of Wikipedia which can be downloaded and shared [3].
  9. I believe that it will be more important to have secure computers after the crunch, because there will be less capacity for overhead. Currently, when extra hardware is required due to DoS attacks or systems need to be reinstalled, we have the resources to do so. Due to the impending lack of resources we need to make things as reliable as possible so that they don’t need to be fixed as often, and this requires making computers more secure.
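
As a rough illustration of the battery sizing mentioned in item 2, here is a small worked sketch; the load, hours of darkness and battery figures are assumptions chosen for the example, not measurements of any real system.

    # Overnight battery sizing for a home server room: a sketch only,
    # every number below is an illustrative assumption.
    load_watts = 60            # assumed draw of a low-power server plus wireless gear
    hours_without_sun = 14     # assumed period to bridge overnight
    battery_voltage = 12       # typical lead-acid battery
    usable_fraction = 0.5      # only discharge lead-acid to about half to preserve its life

    energy_needed_wh = load_watts * hours_without_sun                     # 840 Wh
    capacity_ah = energy_needed_wh / (battery_voltage * usable_fraction)  # 140 Ah
    print("Need roughly %.0f Ah of 12V battery capacity" % capacity_ah)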
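
For item 3, a disk-backed pdnsd cache needs only a short configuration. The following is a sketch written from memory of the pdnsd documentation, with invented addresses; check the option names against the real documentation before relying on it.

    global {
        perm_cache = 8192;               /* size of the on-disk cache in kB */
        cache_dir = "/var/cache/pdnsd";  /* cache file survives reboots and power outages */
        server_ip = 127.0.0.1;           /* or a LAN address to serve the neighbourhood */
        min_ttl = 1d;                    /* keep answers at least a day, even if the zone says less */
    }
    server {
        label = "upstream";
        ip = 10.0.0.1;                   /* the next cache up the wireless backbone (example address) */
        uptest = ping;                   /* only query it when it is actually reachable */
        timeout = 10;
    }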
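
For item 4, the secondary MX arrangement amounts to zone file entries along these lines; the host names are invented for the example.

    ; local mail server first, regional smart-host relay as a fallback
    example.org.    IN  MX  10  mail.example.org.
    example.org.    IN  MX  20  mx.regional-relay.example.net.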
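
For item 5, keeping a local mirror of CD and DVD images is mostly a matter of a cron job running rsync against a larger mirror, something like the command below (the mirror address and paths are placeholders). The --partial option matters on an unreliable wireless link, as a dropped connection then doesn’t force a 4GB transfer to restart from scratch.

    rsync -av --partial --delete \
        rsync://mirror.example.net/debian-cd/ /srv/mirror/debian-cd/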
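
For item 7, a test DomU on a shared Xen server needs only a short configuration file. The sketch below uses invented kernel paths, volume and bridge names; adjust them for the Dom0 in question.

    # /etc/xen/test-machine.cfg : minimal paravirtualised test DomU (sketch)
    kernel  = "/boot/vmlinuz-xen"
    ramdisk = "/boot/initrd-xen"
    memory  = 256
    name    = "test-machine"
    vif     = ['bridge=xenbr0']
    disk    = ['phy:/dev/vg0/test-machine,xvda1,w']
    root    = "/dev/xvda1 ro"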

2 comments to Running the Net After a Collapse

  • Don’t forget distributed revision control systems. If you keep projects in git you’re less reliant on far-away data centers.

  • Don: When I started working on free software development projects everything was based around emailing the output of diff. While there are now things like git, the majority of projects and the majority of developers still rely on using email with diff output.

    While git is apparently doing a lot of good for the key kernel developers, I don’t think it’s a required technology.

    Even with git there are patch dependencies; for example, every time I read one of Andrew Morton’s summaries it seems to include some patches being left out due to conflicts or dependencies.
