Kelsey Hightower: Spinnaker is a Standard Library for software delivery

We just did a great panel with Alan Shimel of DevOps.com, along with:

  • Sarah Novotny, Open Source Lead in Microsoft’s Azure Office of the CTO
  • Kelsey Hightower, Staff Developer Advocate at Google
  • Andy Glover, Director of Productivity Engineering at Netflix
  • Isaac Mosquera, CTO at Armory, and DROdio, CEO at Armory

Kelsey described Spinnaker as a “Standard Library” for software delivery, instead of piecing together software delivery tooling yourself.



In the video we discuss:

  • Minute 2:35: Leaders in the Spinnaker community from Netflix, Microsoft and Google, talking about themes around enterprise software delivery
  • Minute 4:15: How enterprises are moving beyond the Spinnaker use-case, and the investments IaaS providers are making in the cloud
  • Minute 5:55: Discovery’s use case example — an ability to deploy with safety and security to multiple cloud targets
  • Minute 8:01: Being able to deploy code “when it’s ready” — if it’s ready in 6 months, being able to deploy in “6 months and two minutes”
  • Minute 9:18: Deployment is just one of many steps in software delivery
  • Minute 10:20: How long does it take to create a platform like Spinnaker?
  • Minute 12:45: How “tall do you need to be to ride the ride?” Discussing levels of sophistication to leverage the value from a project like Spinnaker
  • Minute 15:03: How Spinnaker is a platform that enables companies to get both safety and velocity — and prioritize developer value creation potential
  • Minute 16:20: How Spinnaker enables trust within an organization to achieve not just continuous delivery, but continuous improvement
  • Minute 19:30: What companies are and are not a good fit for Spinnaker
  • Minute 21:40: How “8 apps” is a tipping point
  • Minute 22:18: How Spinnaker acts as a “Standard Library” for software delivery, instead of piecing together software delivery tooling yourself
  • Minute 26:20: Sharing best-practices across all users by using Spinnaker, due to its OSS and modular plug-in nature
  • Minute 29:40: The “Spinnaker and” approach vs. “Spinnaker or”.
  • Minute 31:10: How to leverage the opinionated nature of Spinnaker while using it as a set of core building blocks — for example, “blue/green” deployments can mean something different across teams and companies
  • Minute 38:30: Spinnaker is OSS — focus your effort on helping make it better, instead of re-building your tooling in-house
  • Minute 41:05: Focus on the bigger picture — throwing the birthday party vs. baking the cake

Dr. Phil Maffetone’s MAF Method for Athletic Performance

Devised by Dr. Philip Maffetone based on 40 years of clinical and scientific research, the MAF Method helps walkers, runners, cyclists and elite athletes of all ages and abilities reach their full human potential.

The method is focussed on exercise, nutrition and stress – the three forces that exert the most influence on achieving optimal health and fitness.

Companies pledge to donate at least 10% of profits to effective charities

Companies pledge to give at least ten percent of their profits to effective charities, following the lead of the Giving What We Can community.

The following article is reprinted from the press release announcing Giving What We Can’s Company Pledge.


Giving What We Can, a community of people who have pledged to give a significant portion of their income to improving the lives of others, today announced the founding members of its Company Pledge. These companies have each pledged to give at least 10% of profits to effective charities.

Four companies have taken the Company Pledge: Australian trade services provider Give Industries, European impact-streetwear brand Studio-1X, American meditation app Waking Up, and Australian technology research & venture builder ISOLABS.

“It seems a simple concept, but the results are undeniable,” says Calvin Baker, co-founder of Give Industries. “By pledging our company profits in support of evidence-based global top performers in promoting human and animal well-being, we’re enabling an amount of good that would be otherwise unachievable by a small business. This purpose has helped us to grow a passionate and motivated workforce.”

“Our mission is to use the power of capitalism as an engine for good,” adds Ida Josefiina of Studio-1X. “We’re advocating for the Small and Medium Enterprises (SME) business community to use their resources and platforms to drive social impact both in terms of awareness, and financial contributions.”

These companies are joining over 5,000 individuals from more than 80 countries who are all united by a commitment to helping others. Together the members of Giving What We Can have donated at least $195 million, and have pledged a further $1.8 billion.

According to Giving What We Can co-founder, Dr. Toby Ord, “Research shows that the best charities can have at least ten times the social impact of the typical charity, and hundreds of times as much as less effective charities. By finding outstanding giving opportunities we can make a significant difference to many more lives than we otherwise would.”

“It’s about living a better life in the world and this requires that one be integrated with society in a way that produces good effects not just for oneself and one’s family and friends but for people one may never meet. I want to help inspire other businesses to do the same. There’s no question that discussing these things in public can help inspire others to rethink their relationship to money and to give more of it to the worthy causes.” – Sam Harris, founder of the Waking Up meditation app

Josefiina emphasises that there is a lot of opportunity for small and medium-sized enterprises to make an impact: “SMEs make up roughly 90% of the global business economy. Activating this segment to participate in solving global problems would have a major impact in the world, and therefore should be seen as a priority.”

These businesses are pushing for change beyond just donating. Give Industries, which is donating 100% of their profits, states: “We’re pretty serious about creating change, both globally and right under our noses. We’re committed to making sure our business is carbon neutral, gender equitable, and employs people often overlooked by the job market.”

“Our aim is to apply advanced technology to important social problems. To maximise our impact, and ensure that no matter what we’re working on we can make a difference, we will be donating to effective charities from day one. We hope to show others that donating even at an early stage can be commercially viable,” says Casey Lickfold, co-founder of ISOLABS.

Giving What We Can seeks to normalise giving more and giving more effectively – to inspire a culture of generosity and a world full of flourishing.


Postscript

  • The Company Pledge is still in its infancy; if your company would like to pledge and to be involved in shaping it, please contact us.
  • The Company Pledge is different from Founders Pledge, which is aimed at the founders of high-growth startups who anticipate a liquidity event. Founders Pledge is an excellent organisation, and we still enthusiastically recommend their pledge for founders for whom it is suitable.

California appeals court rules Uber, Lyft must reclassify drivers as employees

(Reuters) – A California appeals court on Thursday unanimously ruled against ride-hailing companies Uber Technologies Inc UBER.N and Lyft Inc LYFT.O, saying they must reclassify their drivers in the state as employees.

While the ruling does not take effect before a Nov. 3 company-sponsored ballot measure that will give voters the chance to decide the future status of gig workers, it narrows the companies’ options should the measure fail.

The case emerged after California implemented a law, known as AB5, aimed at reclassifying ride-hail, food delivery and other app-based workers as employees entitled to benefits such as unemployment insurance and minimum wage.

California in May sued Uber and Lyft for not complying with AB5. A California judge in August ordered the companies to reclassify their drivers as employees, a ruling the companies appealed under the threat of leaving the state altogether.

The appeals court on Thursday upheld the ruling.

The judges said in a 74-page ruling that Uber’s and Lyft’s misclassification caused irreparable harm to drivers who as independent contractors miss out on employee benefits.

Remedying those harms more strongly served the public interest than “protecting Uber, Lyft, their shareholders, and all of those who have come to rely on the advantages of online ride-sharing,” the ruling said.

Lyft and Uber in a statement said they were considering all legal options, including an appeal.

“This ruling makes it more urgent than ever for voters to stand with drivers and vote yes on Prop. 22,” Lyft said, referring to the Nov. 3 ballot measure, which would repeal AB5 and provide drivers with more limited benefits.

“Today’s ruling means that if the voters don’t say Yes on Proposition 22, rideshare drivers will be prevented from continuing to work as independent contractors, putting hundreds of thousands of Californians out of work and likely shutting down ridesharing throughout much of the state,” Uber said.

Reporting by Kanishka Singh in Bengaluru and Tina Bellon in New York; Editing by Daniel Wallis

Egypt Blocks Access to Telegram

Technology and Law Community “Masaar” documented the blocking of the Telegram website and application by the Egyptian authorities on 22 October 2020. The authorities blocked Telegram on three of the Internet service networks operating in Egypt: “We”, “Vodafone” and “Orange”. Masaar learned of the blocking after complaints from several users on the three networks, who reported that they could not access the application or the website. It should be noted that Telegram is one of the most popular and widespread encrypted chat applications in the world.

Masaar confirms that users of the three networks – We (AS8452), Orange (AS24863) and Vodafone (AS36935) – cannot use the Telegram application on smartphones, as the Egyptian authorities have blocked access to the application’s IP addresses.

The authorities have also blocked the Telegram website itself (telegram.org) and the web version of Telegram (web.telegram.org). The blocking affects both ADSL and mobile internet (4G/3G) connections.
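
For readers curious how this kind of blocking is typically corroborated, the sketch below is a minimal reachability probe; the hostnames are the ones named above, but the script is purely illustrative, is not Masaar’s methodology, and a failed connection from a single vantage point does not by itself prove deliberate blocking.

```python
# Minimal reachability probe (illustrative only; not Masaar's methodology).
# It attempts a TCP handshake with the Telegram endpoints named above and
# reports whether each connection succeeds within a timeout.
import socket

ENDPOINTS = [
    ("telegram.org", 443),      # main website
    ("web.telegram.org", 443),  # web client
]

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "reachable"
    except OSError as exc:
        # Timeouts and resets are typical symptoms of IP-level blocking,
        # but can also be ordinary network failures -- repeat the probe from
        # several networks (e.g. We, Vodafone, Orange) before drawing conclusions.
        return f"unreachable ({exc.__class__.__name__})"

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        print(f"{host}:{port} -> {probe(host, port)}")
```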

It is noteworthy that last September Masaar published a web page stating that the authorities had blocked 596 websites and 32 alternative links since May 2017.

Root, an insurtech targeting a $1bn+ IPO, uses ILS for aggregate reinsurance

Root Insurance, the auto insurtech company, has said in its initial public offering (IPO) documentation that it uses insurance-linked securities (ILS) backed reinsurance capacity to help it manage volatility in its business.

Root Insurance is looking to raise just over $1 billion, including concurrent private placements related to the IPO, which could value the insurtech at more than $6.3 billion once it’s all tied up.

High-growth insurance technology (insurtech) start-ups are attracting stunning valuations, thanks in part to the appetite of venture investors around the world.

For some, this has driven a need to IPO quickly and in a very big way, as insurtechs look to deliver the kind of high growth that investors will demand going forward. That requires robust capital backing and frameworks, part of which is their reinsurance arrangements.

Root Insurance details some of its reinsurance arrangements in its IPO filing, with quota share arrangements seemingly a significant lever for its business model, as it leans on reinsurers’ appetite for these premium, loss and profit sharing arrangements.

But Root has also tapped into the insurance-linked securities (ILS) market, with excess of loss reinsurance funded through a platform that fully collateralizes its reinsurance obligations to the insurtech.

Reinsurance is core to insurtechs like Root, particularly as it takes a full-stack carrier approach, which means it controls more of its own destiny and has the regulatory means to optimise its capital structure; the company says it adopts a capital-light approach.

Access to reinsurance capital, in all its forms, is one way to stay capital-light for full-stack insurtechs like Root, meaning they can adapt to market conditions and also expand their appetites with the support of reinsurance capital sitting behind them.

Hence the quota share is a great tool for a full-stack insurtech like Root, providing it with added underwriting capacity to write more business while a share of its losses is absorbed, so moderating volatility in its operating results.

That’s good for its investors, who will be looking for stable growth, again something a quota share strategy can help to deliver.

Root aims for top-line growth without having to increase its regulatory capital requirements.

The insurtech explains, “Net of third-party reinsurance, over the long term we expect this structure to enable us to write at least four dollars of net retained premium for each one dollar of capital held across Root Insurance Company and our wholly-owned, Cayman Islands-based captive reinsurer, Root Re. While our reinsurance activities cede a portion of the profit, we expect the net impact to be highly accretive to us on a return basis.”

Root actually increased its quota share percentage in the third quarter, taking it up to 70% of premiums shared with third-party reinsurance partners. This has had the effect of reducing its revenues, but over time it would be expected to help the company grow faster and moderate volatility even more.
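
To make the quota share mechanics concrete, here is a minimal sketch of the arithmetic; the 70% cession rate and the four-dollars-of-net-retained-premium-per-dollar-of-capital target come from the filing quoted above, while the dollar amounts are purely hypothetical.

```python
# Illustrative quota-share arithmetic. The 70% cession rate and the 4:1
# premium-to-capital target are from Root's filing; all dollar figures
# below are hypothetical.
gross_written_premium = 100_000_000   # hypothetical gross premium, USD
cession_rate = 0.70                   # share ceded to third-party reinsurers
target_leverage = 4.0                 # >= $4 net retained premium per $1 of capital

net_retained_premium = gross_written_premium * (1 - cession_rate)
required_capital = net_retained_premium / target_leverage

print(f"Net retained premium: ${net_retained_premium:,.0f}")
print(f"Capital implied at 4:1 leverage: ${required_capital:,.0f}")

# Ceding 70% of premium also cedes 70% of losses, which is where the
# volatility dampening comes from: a hypothetical $80m gross loss leaves
# only $24m net.
gross_losses = 80_000_000
net_losses = gross_losses * (1 - cession_rate)
print(f"Net losses on ${gross_losses:,.0f} gross: ${net_losses:,.0f}")
```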

All of which makes customer acquisition and marketing more important for insurtechs following this approach: the more customers they can bring in through the front door, while reinsurers take their share through quota shares, the more potentially sticky client relationships a company like Root can develop and benefit from over the longer term.

For an investor in insurtechs, that’s why it’s important to look at client retention rates, as well as the onboarding and eventual loss ratios.

But, being auto insurance focused, Root also has exposure to volatility caused by catastrophe losses, and so explains that excess of loss reinsurance is also vital to help it protect against large or unanticipated losses.

On the severity side, Root taps major global reinsurers for excess of loss protection, but on the frequency side, which is perhaps more relevant to an auto insurer as it grows and faces impacts from severe storms and hail events, Root taps the ILS market.

“We purchase frequency / aggregate protection against 100% of our net retained premium base from a reinsurance platform that fully collateralizes its potential obligations to us via funding achieved in the insurance-linked security marketplace,” Root explains.

But, while the use of ILS capacity to support its aggregate reinsurance needs is clearly most relevant to our ILS focused audience, for Root it is the quota share model that is helping to support its growth.

“We expect to maintain this target level of third-party quota share reinsurance while rapidly growing our business in order to operate a capital light business model and mitigate market volatility. As our business scales, we expect to have the flexibility to reduce our quota share levels to maximize the return to shareholders,” the insurtech said.

That’s key as Root Insurance looks to raise this new round of funding: a strategy of layered quota shares, augmented with traditional and ILS excess of loss capacity on both an occurrence and aggregate basis, can help the insurtech deliver greater stability in its results while also fuelling its growth.

Internet Ascendant, Part 2: Going Private and Going Public

In the summer of 1986, Senator Al Gore, Jr., of Tennessee introduced an amendment to the Congressional Act that authorized the  budget of the National Science Foundation (NSF). He called for the federal government to study the possibilities for “communications networks for supercomputers at universities and Federal research facilities.” To explain the purpose of this legislation, Gore called on a striking analogy: 

One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide. America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need a telecommunications highway connecting users coast to coast, state to state, city to city. The study required in this amendment will identify the problems and opportunities the nation will face in establishing that highway.1

In the following years, Gore and his allies would call for the creation of an “information superhighway”, or, more formally, a national information infrastructure (NII). As he intended, Gore’s analogy to the federal highway system summons to mind a central exchange that would bind together various local and regional networks, letting all American citizens communicate with one another. However, the analogy also misleads – Gore did not propose the creation of a federally-funded and maintained data network. He envisioned that the information superhighway, unlike its concrete and asphalt namesake, would come into being through the action of market forces, within a regulatory framework that would ensure competition, guarantee open, equal access to any service provider (what would later be known as “net neutrality”), and provide subsidies or other mechanisms to ensure universal service to the least fortunate members of society, preventing the emergence of a gap between the information rich and information poor.2

Over the following decade, Congress slowly developed a policy response to the growing importance of computer networks to the American research community, to education, and eventually to society as a whole. Congress’ slow march towards an NII policy, however, could not keep up with the rapidly growing NSFNET, overseen by the neighboring bureaucracy of the executive branch. Despite its reputation for sclerosis, bureaucracy was created exactly because of its capacity, unlike a legislature, to respond to events immediately, without deliberation. And so it happened that, between 1988 and 1993, the NSF crafted the policies that would determine how the Internet became private, and thus went public. It had to deal every year with novel demands and expectations from NSFNET’s users and peer networks. In response, it made decisions on the fly, decisions which rapidly outpaced Congressional plans for guiding the development of an information superhighway. These decisions rested largely in the hands of a single man – Stephen Wolff.

Acceptable Use

Wolff earned a Ph.D. in electrical engineering at Princeton in 1961 (where he would have been a rough contemporary of Bob Kahn), and began what might have been a comfortable academic career, with a post-doctoral stint at Imperial College, followed by several years teaching at Johns Hopkins. But then he shifted gears, and took a position at the Ballistic Research Laboratory in Aberdeen, Maryland. He stayed there for most of the 1970s and early 1980s, researching communications and computing systems for the U.S. Army. He introduced Unix into the lab’s offices, and managed Aberdeen’s connection to the ARPANET.3

In 1986, the NSF recruited him to manage the NSF’s supercomputing backbone – he was a natural fit, given his experience connecting Army supercomputers to ARPANET. He became the principal architect of NSFNET’s evolution from that point until his departure in 1994, when he entered the private sector as a manager for Cisco Systems. The original intended function of the net that Wolff was hired to manage had been to connect researchers across the U.S. to NSF-funded supercomputing centers. As we saw last time, however, once Wolff and the other network managers saw how much demand the initial backbone had engendered, they quickly developed a new vision of NSFNET, as a communications grid for the entire American research and post-secondary education community.

However, Wolff did not want the government to be in the business of supplying network services on a permanent basis. In his view, the NSF’s role was to prime the pump, creating the initial demand needed to get a commercial networking services sector off the ground. Once that happened, Wolff felt it would be improper for a government entity to be in competition with viable for-profit businesses. So he intended to get NSF out of the way by privatizing the network, handing over control of the backbone to unsubsidized private entities and letting the market take over.

This was very much in the spirit of the times. Across the Western world, and across most of the political spectrum, government leaders of the 1980s touted privatization and deregulation as the best means to unleash economic growth and innovation after the relative stagnation of the 1970s. As one example among many, around the same time that NSFNET was getting off the ground, the FCC knocked down several decades-old constraints on corporations involved in broadcasting. In 1985, it removed the restriction on owning print and broadcast media in the same locality, and two years later it nullified the fairness doctrine, which had required broadcasters to present multiple views on public-policy debates.

From his post at NSF, Wolff had several levers at hand for accomplishing his goals. The first lay in the interpretation and enforcement of the network’s acceptable use policy (AUP). In accordance with NSF’s mission, the initial policy for the NSFNET backbone, in effect until June 1990, required all uses of the network to be in support of “scientific research and other scholarly activities.” This is quite restrictive indeed, and would seem to eliminate any possibility of commercial use of the network. But Wolff chose to interpret the policy liberally. Regular mailing list postings about new product releases from a corporation that sold data processing software – was that not in support of scientific research? What about the decision to allow MCI’s email system to connect to the backbone, at the urging of Vint Cerf, who had left government employ to oversee the development of MCI Mail? Wolff rationalized this – and other later interconnections to commercial email systems such as CompuServe’s – as in support of research by making it possible for researchers to communicate digitally with a wider range of people whom they might need to contact in the pursuit of their work. A stretch, perhaps. But Wolff saw that allowing some commercial traffic on the same infrastructure that was used for public NSF traffic would encourage the private investment needed to support academic and educational use on a permanent basis.

Wolff’s strategy of opening the door of NSFNET as far as possible to commercial entities got an assist from Congress in 1992, when Congressman Rick Boucher, who helped oversee NSF as chair of the Science Subcommittee, sponsored an amendment to the NSF charter which authorized any additional uses of NSFNET that would “tend to increase the overall capabilities of the networks to support such research and education activities.” This was an ex post facto validation of Wolff’s approach to commercial traffic, allowing virtually any activity as long as it produced profits that encouraged more private investment into NSFNET and its peer networks.  

Dual-Use Networks

Wolff also fostered the commercial development of networking by supporting the regional networks’ reuse of their networking hardware for commercial traffic. As you may recall, the NSF backbone linked together a variety of not-for-profit regional nets, from NYSERNet in New York to Sesquinet in Texas to BARRNet in northern California. NSF did not directly fund the regional networks, but it did subsidize them indirectly, via the money it provided to labs and universities to offset the costs of their connection to their neighborhood regional net. Several of the regional nets then used this same subsidized infrastructure to spin off a for-profit commercial enterprise, selling network access to the public over the very same wires used for the research and education purposes sponsored by NSF. Wolff encouraged them to do so, seeing this as yet another way to accelerate the transition of the nation’s research and education infrastructure to private control. 

This, too, accorded neatly with the political spirit of the 1980s, which encouraged private enterprise to profit from public largesse, in the expectation that the public would benefit indirectly through economic growth. One can see parallels with the dual-use regional networks in the 1980 Bayh-Dole Act, which defaulted ownership of patents derived from government-funded research to the organization performing the work, not to the government that paid for it. 

The most prominent example of dual-use in action was PSINet, a for-profit company initially founded as Performance Systems International in 1988. It was created by William Schrader and Martin Schoffstall, respectively the co-founder of NYSERNet and one of its vice presidents. Schoffstall, a former BBN engineer and co-author of the Simple Network Management Protocol (SNMP) for managing the devices on an IP network, was the key technical leader. Schrader, an ambitious Cornell biology major and MBA who had helped his alma mater set up its supercomputing center and get it connected to NSFNET, provided the business drive. He firmly believed that NYSERNet should be selling service to businesses, not just educational institutions. When the rest of the board disagreed, he quit to found his own company, first contracting with NYSERNet for service, and later raising enough money to acquire its assets. PSINet thus became one of the earliest commercial internet service providers, while continuing to provide non-profit service to colleges and universities seeking access to the NSFNET backbone.4

Wolff’s final source of leverage for encouraging a commercial Internet lay in his role as manager of the contracts with the Merit-IBM-MCI consortium that operated the backbone. The initial impetus for change in this dimension came not from Wolff, however, but from the backbone operators themselves.  

A For-Profit Backbone

MCI and its peers in the telecommunications industry had a strong incentive to find or create more demand for computer data communications. They had spent the 1980s upgrading their long-line networks from coaxial cable and microwave – already much higher capacity than the old copper lines – to fiber optic cables. These cables, which transmitted laser light through glass, had tremendous capacity, limited mainly by the technology in the transmitters and receivers on either end, rather than the cable itself. And that capacity was far from saturated. By the early 1990s, many companies had deployed OC-48 transmission equipment with 2.5 Gbps of capacity, an almost unimaginable figure a decade earlier. An explosion in data traffic would therefore bring in new revenue at very little marginal cost – almost pure profit.5

The desire to gain expertise in the coming market in data communications helps explain why MCI was willing to sign on to the NSFNET bid proposed by Merit, which massively undercut the competing bids (at $14 million for five years, versus the $40 million and $25 million proposed by its competitors6), and surely implied a short-term financial loss for MCI and IBM. But by 1989, they hoped to start turning a profit from their investment. The existing backbone was approaching the saturation point, with 500 million packets a month, a 500% year-over-year increase.7 So, when NSF asked Merit to upgrade the backbone from 1.5 Mbps T1 lines to 45 Mbps T3, they took the opportunity to propose to Wolff a new contractual arrangement.

T3 was a new frontier in networking – no prior experience or equipment existed for digital networks of this bandwidth, and so the companies argued that more private investment would be needed, requiring a restructuring that would allow IBM and Merit to share the new infrastructure with for-profit commercial traffic – a dual-use backbone. To achieve this, the consortium would form a new non-profit corporation, Advanced Network & Services, Inc. (ANS), which would supply T3 networking services to NSF. A subsidiary called ANS CO+RE Systems would sell the same services at a profit to any clients willing to pay. Wolff agreed to this, seeing it as just another step in the transition of the network towards commercial control. Moreover, he feared that continuing to block commercial exploitation of the backbone would lead to a bifurcation of the network, with suppliers like ANS doing an end-run around NSFNET to create their own, separate, commercial Internet.

Up to that point, Wolff’s plan for gradually getting NSF out of the way had no specific target date or planned milestones. A workshop on the topic held at Harvard in March 1990, in which Wolff and many other early Internet leaders participated, considered a variety of options without laying out any concrete plans.8 It was ANS’ stratagem that triggered the cascade of events that led directly to the full privatization and commercialization of NSFNET.

It began with a backlash. Despite Wolff’s good intentions, IBM and MCI’s ANS maneuver created a great deal of disgruntlement in the networking community. It became a problem exactly because of the for-profit networks attached to the backbone that Wolff had promoted. So far they had gotten along reasonably well with one another, because they all operated as peers on the same terms. But with ANS, a for-profit company held a de-facto monopoly on the backbone at the center of the Internet.9 Moreover, despite Wolff’s efforts to interpret the AUP loosely, ANS chose to interpret it strictly, and refused to interconnect the non-profit portion of the backbone (for NSF traffic) with for-profit networks like PSI, since that would require a direct mixing of commercial and non-commercial traffic. When this created an uproar, they backpedaled, and came up with a new policy, allowing interconnection for a fee based on traffic volume.

PSINet would have none of this. In the summer of 1991, they banded together with two other for-profit Internet service providers – UUNET, which had begun by selling commercial access to Usenet before adding Internet service; and the California Education and Research Federation Network, or CERFNet, operated by General Atomics – to form their own exchange, bypassing the ANS backbone. The Commercial Internet Exchange (CIX) consisted at first of just a single routing center in Washington D.C. which could transfer traffic among the three networks. They agreed to peer at no charge, regardless of the relative traffic volume, with each network paying the same fee to CIX to operate the router. New routers in Chicago and Silicon Valley soon followed, and other networks looking to avoid ANS’ fees also joined on.

Divestiture

Rick Boucher, the Congressman whom we met above as a supporter of NSF commercialization, nonetheless requested an investigation of the propriety of Wolff’s actions in the ANS affair by the Office of the Inspector General. It found NSF’s actions precipitous, but not malicious or corrupt. Nevertheless, Wolff saw that the time had come to divest control of the backbone. With ANS CO+RE and CIX, privatization and commercialization had begun in earnest, but in a way that risked splitting the unitary Internet into multiple disconnected fragments, as CIX and ANS refused to connect with one another. NSF therefore drafted a plan for a new, privatized network architecture in the summer of 1992, released it for public comment, and finalized it in May of 1993. NSFNET would shut down in the spring of 1995, and its assets would revert to IBM and MCI. The regional networks could continue to operate, with financial support from the NSF gradually phasing out over a four-year period, but would have to contract with a private ISP for internet access.

But in a world of many competitive internet access providers, what would replace the backbone? What mechanism would link these opposed private interests into a cohesive whole? Wolff’s answer was inspired by the exchanges already built by cooperatives like CIX – NSF would contract out the creation of four Network Access Points (NAPs), routing sites where various vendors could exchange traffic. Having four separate contracts would avoid repeating the ANS controversy, by preventing a monopoly on the points of exchange. One NAP would reside at the pre-existing, and cheekily named, Metropolitan Area Ethernet East (MAE-East) in Vienna, Virginia, operated by Metropolitan Fiber Systems (MFS). MAE-West, operated by Pacific Bell, was established in San Jose, California; Sprint operated another NAP in Pennsauken, New Jersey, and Ameritech one in Chicago. The transition went smoothly10, and NSF decommissioned the backbone right on schedule, on April 30, 1995.11

The Break-up

Though Gore and others often invoked the “information superhighway” as a metaphor for digital networks, there was never serious consideration in Congress of using the federal highway system as a direct policy model. The federal government paid for the building and maintenance of interstate highways in order to provide a robust transportation network for the entire country. But in an era when both major parties took deregulation and privatization for granted as good policy, a state-backed system of networks and information services on the French model of Transpac and Minitel was not up for consideration.12

Instead, the most attractive policy model for Congress as it planned for the future of telecommunication was the long-distance market created by the break-up of the Bell System between 1982 and 1984. In 1974, the Justice Department filed suit against AT&T, its first major suit against the organization since the 1950s, alleging that it had engaged in anti-competitive behavior in violation of the Sherman Antitrust Act. Specifically, they accused the company of using its market power to exclude various innovative new businesses from the market – mobile radio operators, data networks, satellite carriers, makers of specialized terminal equipment, and more. The suit thus clearly drew much of its impetus from the ongoing disputes since the early 1960s (described in an earlier installment), between AT&T and the likes of MCI and Carterfone.

When it became clear that the Justice Department meant business, and intended to break the power of AT&T, the company at first sought redress from Congress. John de Butts, chairman and CEO since 1972, attempted to push a “Bell bill” – formally the Consumer Communications Reform Act – through Congress. It would have enshrined into law AT&T’s argument that the benefits of a single, universal telephone network far outweighed any risk of abusive monopoly, risks which in any case the FCC could already effectively check. But the proposal received stiff opposition in the House Subcommittee on Communications, and never reached a vote on the floor of either Congressional chamber. 

In a change of tactics, in 1979 the board replaced the combative de Butts – who had once declared openly to an audience of state telecommunications regulators the heresy that he opposed competition and espoused monopoly – with the more conciliatory Charles Brown. But it was too late by then to stop the momentum of the antitrust case, and it became increasingly clear to the company’s leadership that they would not prevail. In January 1982, therefore, Brown agreed to a consent decree that would have the presiding judge in the case, Harold Greene, oversee the break-up of the Bell System into its constituent parts.

The various Bell companies that brought copper to the customer’s premises, which generally operated by state (New Jersey Bell, Indiana Bell, and so forth), were carved up into seven blocks called Regional Bell Operating Companies (RBOCs). Working clockwise around the country, they were NYNEX in the northeast, Bell Atlantic, BellSouth, Southwestern Bell, Pacific Telesis, US West, and Ameritech. All of them remained regulated entities with an effective monopoly over local traffic in their region, but were forbidden from entering other telecom markets.

AT&T itself retained the “long lines” division for long-distance traffic. Unlike local phone service, however, the settlement opened this market to free competition from any entrant willing and able to pay the interconnection fees to transfer calls in and out of the RBOCs. A residential customer in Indiana would always have Ameritech as their local telephone company, but could sign up for long-distance service with anyone.

However, splitting apart the local and long-distance markets meant forgoing the subsidies that AT&T had long routed to rural telephone subscribers, under-charging them by over-charging wealthy long-distance users. A sudden spike in rural telephone prices across the nation was not politically tenable, so the deal preserved these transfers via a new organization, the non-profit National Exchange Carrier Association, which collected fees from the long-distance companies and distributed them to the RBOCs.

The new structure worked. Two major competitors entered the market in the 1980s, MCI and Sprint, and cut deeply into AT&T’s market share. Long-distance prices fell rapidly. Though it is arguable how much of this was due to competition per se, as opposed to the advent of ultra-high-bandwidth fiber optic networks, the arrangement was generally seen as a great success for de-regulation and a clear argument for the power of market forces to modernize formerly hidebound industries. 

This market structure, created ad hoc by court fiat but evidently highly successful, provided the template from which Congress drew in the mid-1990s to finally resolve the question of what telecom policy for the Internet era would look like. 

Second Time Isn’t The Charm

Prior to the main event, there was one brief preliminary. The High Performance Computing Act of 1991 was important tactically, but not strategically. It advanced no new major policy initiatives. Its primary significance lay in providing additional funding and Congressional backing for what Wolff and the NSF already were doing and intended to keep doing – providing networking services for the research community, subsidizing academic institutions’ connections to NSFNET, and continuing to upgrade the backbone infrastructure.  

Then came the accession of the 104th Congress in January 1995. Republicans took control of both the Senate and the House for the first time in forty years, and they came with an agenda to fight crime, cut taxes, shrink and reform government, and uphold moral righteousness. Gore and his allies had long touted universal access as a key component of the National Information Infrastructure, but with this shift in power the prospects for a strong universal service component to telecommunications reform diminished from minimal to none. Instead, the main legislative course would consist of regulatory changes to foster competition in telecommunications and Internet access, with a serving of bowdlerization on the side. 

The market conditions looked promising. Circa 1992, the major players in the telecommunications industry were numerous. In the traditional telephone industry there were the seven RBOCs, GTE, and three large long distance companies – AT&T, MCI, and Sprint – along with many smaller ones. The new up-and-comers included Internet service providers such as UUNET and PSINet, as well as the IBM/MCI backbone spin-off, ANS; and other companies trying to build out their local fiber networks, such as Metropolitan Fiber Systems (MFS). BBN, the contractor behind ARPANET, had begun to build its own small Internet empire, snapping up some of the regional networks that orbited around NSFNET – Nearnet in New England, BARRNet in the Bay area, and SURANet in the southeast of the U.S.

To preserve and expand this competitive landscape would be the primary goal of the 1996 Telecommunications Act, the only major rewrite of communications policy since the Communications Act of 1934. It intended to reshape telecommunications law for the digital age. The regulatory regime established by the original act siloed industries by their physical transmission medium – telephony, broadcast radio and television, cable TV – each in its own box, with its own rules, and generally forbidden to meddle in the others’ business. As we have seen, sometimes regulators even created silos within silos, segregating the long-distance and local telephone markets. This made less and less sense as media of all types were reduced to fungible digital bits, which could be commingled on the same optical fiber, satellite transmission, or ethernet cable.

The intent of the 1996 Act, shared by Democrats and Republicans alike, was to tear down these barriers, these “Berlin Walls of regulation”, as Gore’s own summary of the act put it.13 A complete itemization of the regulatory changes in this doorstopper of a bill is not possible here, but a few examples provide a taste of its character. Among other things it:

  • allowed the RBOCs to compete in long-distance telephone markets,
  • lifted restrictions forbidding the same entity from owning both broadcasting and cable services,
  • axed the rules that prevented concentration of radio station ownership.

The risk, though, of simply removing all regulation, opening the floodgates and letting any entity participate in any market, was to recreate AT&T on an even larger scale, a monopolistic megacorp that would dominate all forms of communication and stifle all competitors. Most worrisome of all was control over the so-called last mile – from the local switching office to the customer’s home or office. Building an inter-urban network connecting the major cities of the U.S. was expensive but not prohibitive; several companies had done so in recent decades, from Sprint to UUNET. To replicate all the copper or cable to every home in even one urban area was another matter. Local competition in landline communications had scarcely existed since the early wildcat days of the telephone, when tangled skeins of iron wire criss-crossed urban streets. In the case of the Internet, the concern centered especially on high-speed, direct-to-the-premises data services, later known as broadband. For years, competition had flourished among dial-up Internet access providers, because all the end user required to reach the provider’s computer was access to a dial tone. But this would not be the case by default for newer services that did not use the dial telephone network.

The legislative solution to this conundrum was to create the concept of the “CLEC” – competitive local exchange carrier. The RBOCs, now referred to as “ILECs” (incumbent local exchange carriers), would be allowed full, unrestricted access to the long-distance market only once they had unbundled their networks by allowing the CLECs, which would provide their own telecommunications services to homes and businesses, to interconnect with and lease the incumbents’ infrastructure. This would enable competitive ISPs and other new service providers to continue to get access to the local loop even when dial-up service became obsolete – creating, in effect, a dial tone for broadband. The CLECs, in this model, filled the same role as the long-distance providers in the post-break-up telephone market. Able to freely interconnect at reasonable fees to the existing local phone networks, they would inject competition into a market previously dominated by the problem of natural monopoly.

Besides the creation of the CLECs, the other major part of the bill that affected the Internet addressed the Republicans’ moral agenda rather than their economic one. Title V, known as the Communications Decency Act, forbade the transmission of indecent or offensive material – depicting or describing “sexual or excretory activities or organs” – on any part of the Internet accessible to minors. This, in effect, was an extension of the obscenity and indecency rules that governed broadcasting into the world of interactive computing services.

How, then, did this sweeping act fare in achieving its goals? In most dimensions it proved a failure. Easiest to dispose of is the Communications Decency Act, which the Supreme Court struck down quickly (in 1997) as a violation of the First Amendment. Several parts of Title V did survive review, however, including Section 230, the most important piece of the entire bill for the Internet’s future. It allows websites that host user-created content to exist without the fear of constant lawsuits, and protects the continued existence of everything from giants like Facebook and Twitter to tiny hobby bulletin boards.

The fate of the efforts to promote competition within the local loop took longer to play out, but proved no more successful than the controls on obscenity. What about the CLECs, given access to the incumbent cable and telephone infrastructure so that they could compete on price and service offerings? The law required FCC rulemaking to hash out the details of exactly what kind of unbundling had to be offered. The incumbents pressed the courts hard to dispute any such ruling that would open up their lines to competition, repeatedly winning injunctions against the FCC, while threatening that introducing competitors would halt their imminent plans for bringing fiber to the home.

Then, with the arrival of the Bush Administration and new chairman Michael Powell in 2001, the FCC became actively hostile to the original goals of the Telecommunications Act. Powell believed that the need for alternative broadband access would be satisfied by intermodal competition among cable, telephone, power-line, cellular and wireless networks. No more FCC rules in favor of CLECs would be forthcoming. For a brief time around the year 2000, it was possible to subscribe to third-party high-speed internet access using the infrastructure of your local telephone or cable provider. After that, the most central of the Telecom Act’s pro-competitive measures became, in effect, a dead letter. The much ballyhooed fiber-to-the-home only began to reach a significant number of homes after 2010, and then only with reluctance on the part of the incumbents.14 As author Fred Goldstein put it, the incumbents had “gained a fig leaf of competition without accepting serious market share losses.”15

During most of the twentieth century, networked industries in the U.S. had sprouted in a burst of entrepreneurial energy and then been fitted into the matrix of a regulatory framework as they grew large and important enough to affect the public interest. Broadcasting and cable television had followed this pattern. So had trucking and the airlines. But with the CLECs all but dead by the early 2000s, the Communications Decency Act revoked, and other attempts to control the Internet such as the Clipper chip16 stymied, the Internet would follow an opposite course. 

Having come to life under the guiding hand of the state, it would now be allowed to develop in an almost entirely laissez-faire fashion. The NAP framework established by the NSF at the hand-off of the backbone would be the last major government intervention in the structure of the Internet. This was true at both the transport layer – the networks such as Verizon and AT&T that transported raw data – and the applications layer – software services from portals like Yahoo! to search engines like Google to online stores like Amazon. In our last chapter, we will look at the consequences of this fact, briefly sketching the evolution of the Internet in the U.S. from the mid-1990s onward.

[Previous] [Next]

  1. Quoted in Richard Wiggins, “Al Gore and the Creation of the Internet” 2000.
  2. “Remarks by Vice President Al Gore at National Press Club”, December 21, 1993.
  3. Biographical details on Wolff’s life prior to NSF are scarce – I have recorded all of them that I could find here. Notably I have not been able to find even his date and place of birth.
  4. Schrader and PSINet rode high on the Internet bubble in the late 1990s, acquiring other businesses aggressively, and, most extravagantly, purchasing the naming rights to the football stadium of the NFL’s newest expansion team, the Baltimore Ravens. Schrader tempted fate with a 1997 article entitled “Why the Internet Crash Will Never Happen.” Unfortunately for him, it did happen, bringing about his ouster from the company in 2001 and PSINet’s bankruptcy the following year.
  5. To get a sense of how fast the cost of bandwidth was declining – in the mid-1980s, leasing a T1 line from New York to L.A. would cost $60,000 per month. Twenty years later, an OC-3 circuit with 100 times the capacity cost only $5,000, more than a thousand-fold reduction in price per capacity. See Fred R. Goldstein, The Great Telecom Meltdown, 95-96. Goldstein states that the 1.544 Mbps T1/DS1 line has 1/84th the capacity of OC-3, rather than 1/100th – the former reflects channelized SONET capacity (an OC-3 carries three DS3s of 28 DS1s each), the latter the ratio of raw line rates. But this has little effect on the overall math.
  6. Office of Inspector General, “Review of NSFNET,” March 23, 1993.
  7. Fraser, “NSFNET: A Partnership for High-Speed Networking, Final Report”, 27.
  8. Brian Kahin, “RFC 1192: Commercialization of the Internet Summary Report,” November 1990.
  9. John Markoff, “Data Network Raises Monopoly Fear,” New York Times, December 19, 1991.
  10. Though many other technical details had to be sorted out, see  Susan R. Harris and Elise Gerich, “Retiring the NSFNET Backbone Service: Chronicling the End of an Era,” ConneXions, April 1996.
  11. The most problematic part of privatization proved to have nothing to do with the hardware infrastructure of the network, but instead with handing over control over the domain name system (DNS). For most of its history, its management had depended on the judgment of a single man – Jon Postel. But businesses investing millions in a commercial internet would not stand for such an ad hoc system. So the government handed control of the domain name system to a contractor, Network Solutions. The NSF had no real mechanism for regulatory oversight of DNS (though they might have done better by splitting the control of different top-level domains (TLDs) among different contractors), and Congress failed to step in to create any kind of regulatory regime. Control changed once again in 1998 to the non-profit ICANN (Internet Corporation for Assigned Names and Numbers), but the management of DNS still remains a thorny problem.
  12. The only quasi-exception to this focus on fostering competition was a proposal by Senator Daniel Inouye to reserve 20% of Internet traffic for public use: Steve Behrens, “Inouye Bill Would Reserve Capacity on Infohighway,” Current, June 20, 1994. Unsurprisingly, it went nowhere.
  13. Al Gore, “A Short Summary of the Telecommunications Reform Act of 1996”.
  14. Jon Brodkin, “AT&T kills DSL, leaves tens of millions of homes without fiber Internet,” Ars Technica, October 5, 2020.
  15. Goldstein, The Great Telecom Meltdown, 145.
  16. The Clipper chip was a proposed hardware backdoor that would give the government the ability to bypass any U.S.-created encryption software.

Further Reading

Janet Abbate, Inventing the Internet (1999)

Karen D. Fraser “NSFNET: A Partnership for High-Speed Networking, Final Report” (1996)

Shane Greenstein, How the Internet Became Commercial (2015)

Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (2018)

Rajiv Shah and Jay P. Kesan, “The Privatization of the Internet’s Backbone Network,” Journal of Broadcasting & Electronic Media (2007)

Not enough room on aircraft, so Australian commando killed Afghan prisoner

A United States Marine Corps helicopter crew chief has accused Australian special forces of shooting dead one of seven bound Afghan prisoners because there was only space for six on the US aircraft due to collect them.

The chief, “Josh”, flew 159 combat missions for the Marine Corps’ Light Attack Helicopter Squadron 469.

He told Australia’s ABC Investigations he was a door gunner providing aerial covering fire for the Australian soldiers of the 2nd Commando Regiment during a night raid in mid-2012, north of his squadron’s base in Afghanistan’s Helmand Province.

The raid was part of a broader joint Australian special forces-US Drug Enforcement Administration campaign targeting drug operations financing the Taliban.

Josh told the ABC: “We just watched them tackle and hogtie these guys and we knew their hands were tied behind their backs”.

He said the Australian commandos then called for the US aircraft to pick them and seven prisoners up.

“The pilot said, ‘That’s too many people, we can’t carry that many passengers.’ And you just heard this silence and then we heard a pop. And then they said, ‘OK, we have six prisoners’.”

The USMC chief said it was “apparent to everybody involved in that mission that they had just killed a prisoner that we had just watched them catch and hogtie”.

Josh told the ABC he and the other Marines “were pretty aware of what we just witnessed”.

“We just witnessed them kill a prisoner… This isn’t like a heat of the moment call where you’re trying to make a decision. It was a very deliberate decision to break the rules of war.”

Josh said that in an earlier mission that year another Marine witnessed Australian commandos shoot dead an unarmed man sitting on a wall nearby after they landed.

One member of 2nd Commando’s Oscar platoon who served on that deployment confirmed to the ABC that the Americans were unhappy with the conduct of some of his comrades.

It is unclear if the latest allegation is being examined by the investigation into alleged Australian war crimes in Afghanistan which has been underway for several months, the findings of which are expected soon.

Josh told the ABC that the British SAS “always had an incredible restraint, at least in the times when me and my friends worked with them”.

“Everybody else would step on the lines, but the Aussies would just see the line and just hop right over it.”

An Australian Defence Force spokesperson told ABC: “It is not appropriate for Defence to comment on matters that may or may not be the subject of the Afghanistan Inquiry.”

Japanese craft breweries are turning unsold beer into gin

But when Covid-19 took hold, the Olympics were postponed and the already struggling economy took a further battering.
With bars and restaurants suffering a significant reduction in business, beer sales in Japan dropped 26% by volume for the first half of the year, according to Bloomberg.
That’s a big problem for small beer breweries, says Isamu Yoneda, head distiller at artisanal drinks maker Kiuchi Brewery. With few customers in its brewpubs, and export orders canceled, Kiuchi Brewery was left with a stockpile of spoiling beer.
The company had to come up with a solution — and decided to turn the unsold beer into a different alcoholic beverage.
In April, Kiuchi Brewery launched the “Save Beer Spirits” campaign at its Tokyo distillery, offering local bars and breweries the chance to turn unused beer, a product with a four to six-month shelf life, into gin — a product without an expiration date.

A mission to save beer

In 1994, Japan relaxed its strict laws around microbrewing, sparking a boom in craft beer.
While overall beer sales in Japan have stagnated for the last decade, craft beer has been on the rise: its 0.5% share of the total beer market in 2007 had more than tripled by 2016.
Kiuchi Brewery — which began as a sake producer in 1823 — is one of many drinks producers that branched into craft beer when microbrewing laws changed. It has been making its signature Hitachino Nest craft beer for 24 years.
Yoneda says that turning beer into spirits isn’t a new idea. Kiuchi Brewery has been using beer to make plum wine liqueur for years, and has experimented with gin liqueurs in the past.
Most gins are made with a base of grains like barley, rye or wheat, which are fermented into a mash, then distilled into a high-proof “neutral” spirit. The spirit is then distilled a second time with juniper berries and other botanicals, which add flavor.
The beer replaces this neutral spirit, skipping the mash and fermentation process, and jumping straight to distillation.
Gin is distilled in copper stills. The stills used by Kiuchi Brewery have a "swan neck" design.

Kiuchi Brewery asked participating bars to send in a minimum of 20 liters of unused beer, which would be sent back as gin, says Yoneda. Kiuchi can produce eight liters of gin from every 100 liters of beer. It then sends the gin back as standard 750ml bottles or as a sparkling gin cocktail, either in cans or in a keg for bars to use in their taps.
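
As a rough sanity check on those numbers, the short sketch below applies the stated yield (eight liters of gin per 100 liters of beer) and the 750ml bottle size to the 20-liter minimum send-in; it is back-of-the-envelope arithmetic, not Kiuchi’s actual production figures.

```python
# Back-of-the-envelope yield check using the figures quoted above:
# 8 liters of gin per 100 liters of beer, 750 ml bottles, 20-liter minimum.
GIN_PER_LITER_OF_BEER = 8 / 100
BOTTLE_SIZE_L = 0.75

def gin_yield(beer_liters: float) -> tuple[float, float]:
    """Return (liters of gin, equivalent 750 ml bottles) for a given beer volume."""
    gin_liters = beer_liters * GIN_PER_LITER_OF_BEER
    return gin_liters, gin_liters / BOTTLE_SIZE_L

for beer in (20, 100):  # minimum send-in and the reference 100-liter batch
    gin, bottles = gin_yield(beer)
    print(f"{beer} L of beer -> {gin:.1f} L of gin (~{bottles:.1f} bottles)")
```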
Yoneda says the beer base makes the gin bitter, but in addition to juniper berries, Kiuchi uses sansho peppers, lemons and mikan (Japanese oranges), which helps to “balance out the bitterness” with “citrusy notes.”
The bars only have to shoulder the cost of delivery, with Kiuchi Brewery offering its distillation service free of charge. “In these troublesome times, it is our responsibility to offer this service to everyone,” says Yoneda. “Most importantly, we want to keep the breweries and bar community alive.”

A sustainable spirit

Kiuchi isn’t the only brewery using beer to make gin.
The Ethical Spirits & Co was founded in February 2020 to help sake distillers turn leftover sake lees into new spirits, says co-founder Chikara Ono. When the pandemic hit and beer sales plummeted, Ono says the company began exploring new recipes to make gin from beer.
Revive gin is made with Budweiser beer, and flavored with lemon peel, beech wood, cinnamon and san'ontō, a dark, sweet sugar.

In May, they received a donation of 20,000 liters of expiring Budweiser from drinks giant AB InBev, which had a surplus of stock due to a drop in beer sales. The startup used the beer to create 4,500 bottles of gin.
“We had a problem of excess inventory and Ethical Spirits had the knowledge and the right ethos to create a product that we mutually thought would be a positive impact,” says Takahiro Shimada, head of marketing for AB InBev Japan, adding that the company wanted to support local businesses.
The Ethical Spirits & Co is still in the process of building its own distillery in Tokyo, scheduled to open in December, so they collaborated with Gekkeikan sake distillery to distil the Budweiser.
The beer-based gin initiatives are tapping into a rapidly emerging market.
Beam Suntory purchased British craft gin maker Sipsmith in 2016, and launched its first Japanese craft gin, Roku, the following year.

Japan’s first dedicated gin distillery opened just four years ago in Kyoto, but the gin market is already estimated to be worth $209 million and is anticipated to grow by 4.4% annually over the next three years. Large drinks companies, including Japanese whisky giants Suntory and Nikka, have helped launch Japanese craft gin onto the international stage.
Drinking trends in Japan are pointing towards gin sodas and ready-to-drink canned cocktails, creating an opportunity for creative spirit producers to sustainably reuse surplus drink stock, says Ono.
“If you can essentially use unused or remaining ingredients to create something special and something premium, that’s great. It follows with our vision of trying to achieve a sustainable, circular economy,” says Ono.