

in which i attempt to explain urbit

Published:

urbit is a complicated topic. writing an explanation of a complex subject forces you to evaluate your mental model and make sure it’s coherent, which is probably something i could benefit from. there are probably a dozen essays like this scattered across the internet, but a few things aren’t always clearly explained, and i thought i could do my part to elucidate them. more than anything though, i wanted to write the explainer about urbit that i wish i could have read a few years ago, to link to my friends to explain why i bought my star. you will forgive me if i’m a little bit off with the really low-level stuff, though, as it’s a bit out of my league.

urbit is a mind-bogglingly ambitious project by a company called tlon meant to change the way we use networked computers. in order to achieve this, it throws out the software stack that has been katamari’d on top of unix for the last four decades, and begins anew with a few tiny pieces of code that are intended to cascade into a new kind of internet: an internet where you control all of your own data, centralized web services and advertising networks that spy on you are outmoded, and you can (but don’t have to) interact with everything on the net using a single identity.

urbit is a network, but every node is a general purpose computer. at the core of each node is nock, a tiny function. this is the mathematical definition of computing that every component of an individual urbit (a ‘ship’) is built on top of. this function is combined with an individual urbit’s event log to produce a deterministic state. (i’m going to be real with you, this is the part i understand the least as i’m not a computer scientist or even a programmer, but i think i get the gist.) because every urbit’s state is deterministically reproduced from this core function and its event log, all software on it can be updated live over the network. on top of nock, urbit installs an operating system called arvo, which has a native functional programming language called hoon. once your ship is up and running with arvo and the accompanying set of basic utilities (a shell, a messaging bus, a secrets vault, a web server, a revision-controlled filesystem, &c.), you can program it to do whatever you’d like – general purpose computing. for the moment, urbit runs as a *nix application, but the ultimate (very long term) goal is to be a primary operating system for your personal server, controlling your digital life. the networking between urbits is encrypted, peer-to-peer, and built into the foundations of the system, but ships can interface with the rest of the internet as well. we’ll get back to this in a bit.
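to make the determinism point a little more concrete, here is a toy sketch in python (to be clear, this has nothing to do with nock or hoon, and the events are invented; it is just the shape of the idea): the ship’s state is a pure function folded over its event log, so replaying the same log always reproduces the same state.

from functools import reduce

# toy model of the idea above: a ship's state is a pure transition
# function folded over its event log, so the same log always yields
# the same state. (illustrative only; real urbit state transitions
# are defined by nock, not python, and these events are invented.)

def transition(state, event):
    kind, key, value = event
    if kind == "set":
        return {**state, key: value}
    return state

event_log = [
    ("set", "owner", "~fadfun-mogluc"),
    ("set", "sponsor", "~tamten"),
]

state = reduce(transition, event_log, {})
assert state == reduce(transition, event_log, {})  # same log, same state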

every urbit has a network address, which also functions as an identity on the network. think of it as a combination of an IP address, a domain name, and an email address in one. this address is a number, but it is converted to a human-pronounceable format. there are five tiers of addresses, and each tier has particular privileges. at the top are 8-bit addresses, called galaxies, which act like root nodes on the network and sign software updates for everyone else. since the address is an 8-bit number, there are 256 of them, and they have a tilde’d one-syllable name like ~zod. galaxies can also issue ships of the tiers below them, stars and planets. stars are 16-bit addresses (so a little over 65,000 of them) with two-syllable names, like ~tamten (my baby). stars, like galaxies, are network infrastructure: they perform peer discovery for the tier below them, planets, which they also issue. planets are 32-bit addresses, and look like ~fadfun-mogluc. there are 4.2 billion possible planets and they are the main point of urbit – individual personal servers for humans. because there is a finite number of planets, they have a value. this is meant to be some small but nontrivial sum – maybe $5-10, but ultimately up to the market. ships are cryptographic property, and as long as someone is the sole possessor of a ship’s private key, they are the only person who can control it.

two additional categories exist – moons and comets. each planet can issue ships from a space of another 4.2b addresses under it, which are called moons. moons are permanently attached to the planet that issues them, and are variously envisioned as urbits for an individual person’s IoT devices, or for the members of a family/group that jointly own the planet. comets are 128-bit addresses that are self-signed – that is, not issued by a star or galaxy. for now they’re full citizens of the urbit network, but i’ve heard one of the devs mention in an interview that they’re not certain that comets will remain, so enjoy them while you can!
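to make the tier arithmetic above concrete, here is a toy classifier in python. it is a sketch resting on my assumption that the tiers are carved out of one address space by size, and it ignores the phonemic name encoding that turns numbers into names like ~zod or ~tamten.

# toy classifier for the address tiers described above: galaxies fit
# in 8 bits, stars in 16, planets in 32, moons in 64, comets in 128.
# (tier arithmetic only; the name encoding is not shown here.)

def tier(address: int) -> str:
    if address < 2**8:
        return "galaxy"   # 256 of these
    if address < 2**16:
        return "star"     # a little over 65,000
    if address < 2**32:
        return "planet"   # about 4.2 billion
    if address < 2**64:
        return "moon"     # another 4.2 billion under each planet
    if address < 2**128:
        return "comet"    # self-signed
    raise ValueError("not an urbit address")

print(tier(0))            # galaxy
print(tier(40000))        # star
print(tier(3000000000))   # planet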

i mentioned that moons are ‘attached’ to planets; with the exception of moons, the tiers of ships in the urbit network’s hierarchy are bound by voluntary relationships. a galaxy or star issues a planet, and by default that planet uses the galaxy/star for peer discovery and receiving updates. implicit in this is some kind of business or personal relationship – as a star, you provide services to the planets under you. however, should someone decide you’re unreliable, or for any other reason prefer another star, they can very easily move to one. this is meant to incentivize stars to be good actors in order to preserve their business. but it runs both ways: a star can stop routing for a planet for any reason as well, spamming being the obvious one. these incentives are meant to encourage good behavior and address the current internet’s vulnerability to inexhaustible identities and sybil attacks. because addresses are finite and have monetary value, and because there is someone holding you personally accountable for abusing the network, it only makes sense to behave. if you’re booted for spamming, you’re stuck trying to convince somebody else to route for a spammer (and they in turn will be accountable to others for allowing spam). urbit’s developers also envision some system(s) of reputation eventually arising organically, though they haven’t built one in.

the ability to trivially change your patron star (or galaxy) is probably the most political design decision that remains in urbit, but i think it’s a brilliant one. rules and norms are enforced in a decentralized manner, but personal or political disputes can be sidestepped cleanly. because the stars and galaxies are distributed across a wide variety of people, you should always be able to find a patron you’d get along with. in the distant future the network may splinter into mutually exclusive factions, but no single entity is meant to be able to control the whole network.

a common misconception about the design of the network is that planets are somehow only interfacing with the other ships under their star or galaxy, but this isn’t the case. routing and updates are federated to the tiers above you, but the network is fully peer to peer. stars and galaxies bounce requests for addresses between each other so that your urbit can speak directly with whichever ship it likes. with your urbit you get the decentralized network baked into the platform for free, so it’s straightforward to build decentralized webapps on top of urbit (relatively! assuming you can handle the arcane programming language). if you and your friends are already running ships, then you can all install software that adds twitter- or instagram-like functionality inside your urbit. the difference of course is that your twitter-like is speaking directly with your friends’ urbits to exchange feeds, and your own data is stored on your own computer instead of a corporate mainframe that spies on you.

the hard part of getting people onto a platform is the network effects of preexisting platforms, but tlon has a clever idea to surmount this. like i said near the beginning, your urbit can interface with the traditional internet. sometime in the mid-term future, urbit is meant to operate as an API aggregator, a mecha suit cockpit for all of your web accounts in one place. social networks exercise draconian control over other apps’ use of their APIs, but your urbit will use a personal API to scrape your personal data and feeds from the service – something that looks much more benign (in the short term) from the perspective of a twitter or fb, and much more difficult to quash should they decide to try, since if it comes down to it your urbit app can just scrape web pages. (i should note that, for now, these gateways to other services are not available to you and me, though i believe they’re under development.) you can control everything in one place, and make your accounts play with each other however you desire, since it’s all just data on a computer you can program. once your urbit is the most useful place for you and your friends to control your gmail, chat, and tweets all at the same time, you’re already piloting your accounts from a decentralized network, so why not cut out the middlemen? develop or install another piece of software that provides the same functionality, but with your data controlled by you. chat on a reddit-style platform, install a decentralized git, use urbit’s messaging instead of email or DM, &c. i’m focusing on social networks because they’re most people’s primary use for the net, but the possible software is by no means limited to them (eg urbit has a basic web publishing server built into the core software that’s easy to use as a blog).
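to gesture at what that ‘personal api’ idea could look like in practice, here is a deliberately dumb python sketch. everything in it is hypothetical (the url, the token, the shape of the response); the point is only that your own credentials pull your own data into storage you control.

import json
import urllib.request

# hypothetical sketch of the 'personal api' idea: fetch your own feed
# from a service with your own credentials, then store it locally
# where software you control (an urbit app, say) can work with it.
# the endpoint and token below are placeholders, not a real api.

FEED_URL = "https://api.example.com/v1/me/timeline"  # placeholder
TOKEN = "your-personal-access-token"                 # placeholder

def fetch_feed():
    req = urllib.request.Request(
        FEED_URL, headers={"Authorization": "Bearer " + TOKEN}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def save_locally(feed, path="my_timeline.json"):
    with open(path, "w") as f:
        json.dump(feed, f, indent=2)

if __name__ == "__main__":
    save_locally(fetch_feed())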

altogether it’s the old dream of a decentralized internet, suddenly possible; the big web corporations become superfluous, because we don’t need to use their servers anymore. i bought a star because i really want to see this happen. when i was in middle school i learned about bittorrent and the piratebay’s infamous conflicts with and technological challenges to IP law enforcement, which i thought was the coolest thing in the world (particularly the ingenuity invested in making it impossible to shut down), and decentralization tech has been something i’ve paid attention to ever since. the last five years or so especially have seen a flourishing of ideas and networks, growing in particular out of the snowden revelations and cryptocurrency. urbit is not directly part of these software ecosystems (in development since 2002!), but it plays well with them. a point the CEO has made repeatedly in interviews is that decentralized networks don’t compete with each other in the same ways as traditional social networks, and in fact complement each other. an urbit is a true computer, and in principle it can control your scuttlebutt and tumblr feeds together.

anyway, that’s the spiel. there’s a bit more i may come back and add sometime (eg governance, the use of ethereum as a PKI, possible interplay with cryptocurrencies) but i think that’s a reasonable introduction. my girlfriend is surely sick of hearing me talk about this (but too polite to say so), so i’m putting it here for reference.


a poem

Published:

this is a poem i wrote about four years ago. it’s only the second poem i ever wrote, i actually still like it a lot, and i wish i had the courage to write more. the ever-present fear of looking like an idiot keeps the irony shields up.

drones are uneasy to me. i’ve always had that teenage boy fetish for sleek weapon systems and military vehicles, i’m not above admitting that. but drones seem like a premonition in a way that i’m not sure has an easy historical parallel. the obvious avenue for comparison is the atom bomb, but that’s both a difference of degree and magnitude. drones are something different – we’re hurtling toward a future of autonomously maintained empire, a historical shift where sovereignty is decoupled from manpower, an immediate after-effect of industrial automation and machine vision. swarm-enforced, mesh-networked, always watching.

i remembered i had written this when i was looking at the header image for this blog. if you don’t know – or you don’t know me – i have that image (general atomics mq-9 reaper) tattooed across my forearm.

anyway, here’s the poem.

reaper

a hawk loiters

at 60,000 feet.

there are cameras.

_

the network is ether;

supernatural framework to facilitate divine will.

_

we know that God is real now

and that He sees everything.

His angels blast

dull klaxons over asia.


domains and virtual hosts

Published:

i spent the better part of the last day working on getting a second site running on my vps. it’s live now – you can see it at hyperstition.al (get it?). i got lazy about theming and just used hugo with the theme i tweaked for this blog for the moment. this writeup is mostly for reference in case anything breaks in the future.

first step was registering .al, as cheaply as possible, which i ended up doing for 13/yr with a registrar called istanco (no points for guessing the nationality). it took surprisingly long for the domain to propagate – at least six hours; i eventually went to bed. i thought that kind of latency was relegated to the old days, but i suppose that is to be expected with an albanian domain.
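for future reference, the painless way to check propagation while you wait is to query a public resolver directly, something like:

dig +short NS hyperstition.al @8.8.8.8
dig +short A hyperstition.al @8.8.4.4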

the next step was configuring a second virtual host on apache. this part was especially confusing because the official documentation appeared to tell me to place the edits in /etc/hosts, when they actually went in httpd.conf (apache2.conf on ubuntu). then apache insisted on serving the document root as /var/www/html instead of /var/www/hyperstition.al/public_html, which i couldn’t figure out how to fix, so i sighed and just tried to move on to configuring ssl.
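for reference, a minimal name-based virtual host block for this setup looks something like the following. this is a sketch rather than my exact config (the www alias and log directives are extras), and on ubuntu the more idiomatic home for it is a file under /etc/apache2/sites-available/ enabled with a2ensite, though it also works inline.

<VirtualHost *:80>
    ServerName hyperstition.al
    ServerAlias www.hyperstition.al
    DocumentRoot /var/www/hyperstition.al/public_html
    ErrorLog ${APACHE_LOG_DIR}/hyperstition.al-error.log
    CustomLog ${APACHE_LOG_DIR}/hyperstition.al-access.log combined
</VirtualHost>

the document root confusion above is exactly what DocumentRoot controls, so that is the first line to check if the wrong site gets served.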

let’s encrypt is some seriously magical software. i went through the motions of following a DO tutorial for configuring virtual hosts on ubuntu, and in the most baffling step in this process, running the setup script somehow managed to crash my vps. as in, i had to log into the vps control panel and turn it back on. i don’t have the faintest fucking clue how this happened.

i double checked my config files, remembered to strip out the stuff i had put into my hosts file, and ran it again – voila, not only did it work without a hitch, it began serving the new domain out of the proper document root.

so, now i have a cute albanian domain hack, and it’s got forced ssl.

the next bit took an embarrassingly long time to figure out. to update this blog, i run a relatively straightforward command:

rsync -av public/ reid@artorias.pw:/var/www/artorias.pw/public_html/

however, with the new domain’s folder i kept getting error messages along the lines of

rsync: recv_generator: mkdir "/var/www/hyperstition.al/public_html/categories" failed: Permission denied (13)

i spent a good hour and a half trying to figure out what the fuck was going on, deciding it had something to do with folder permissions and trying to learn how to parse ls -l output. eventually i guessed that using chown on /var/www/hyperstition.al to change the owner ought to work, and some time after that realized that changing the owner of public_html inside that folder was the real key. and, finally, all is as it should be – rsync copies everything over without a hitch.
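for future me, the fix boiled down to a one-liner along these lines (assuming the same reid account from the rsync command above; your user and group may differ):

sudo chown -R reid:reid /var/www/hyperstition.al/public_html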

i’m probably going to change the theme to code-editor once i get around to writing something worth posting, but for now, i’m happy that everything is still humming along. and to celebrate: indian food tonight.


thoughts on the wikileaks cia malware dump

Published:

this morning, wikileaks posted a summary and partial release of a very large dump of internal cia programs related to their offensive cyber capabilities (nyt writeup, if you prefer). this kind of thing is like christmas to me – i’ve been fascinated by ‘forbidden knowledge’ since i was a kid, downloading anarchist cookbook .rtf’s on kazaa in middle school. to me, whistleblower doc dumps are like finding a new favorite author now, a rare treat where i’m handed a corpus of information to mull over and incorporate into my thinking.

there’s nothing truly shocking in the documents, besides the fact that this appears to be burning the cia’s entire cyber apparatus like a sheet of flash-paper – a few people in langley are having a very bad day right now. beyond that though, the nuts and bolts of their ops are basically what one would expect given what we already know about state-actor APTs – sophisticated multi-platform malware, including for routers, as well as ios and android exploits that circumvent encrypted chat clients (eg signal or whatsapp) ‘left of crypto’ – that is, before the encryption is applied. all the curve25519 in the world won’t help you if your iphone is rooted.

a very interesting bit of trivia contained in the summary is that the cia’s malware and c2 servers are not technically classified, in order to avoid regulation – they’re instead simply obfuscated. classified material isn’t supposed to sit on servers that face the public internet, so the cia sidestepped the rules by not classifying any of it, which means it probably isn’t technically illegal to hand off those particular files – not that it would ever fly if the leaker is caught. as prior whistleblower cases have demonstrated, the IC is willing to (attempt to) retroactively classify documents, or ground diplomatic flights, in order to nail ‘traitors’.

the UMBRAGE program described is essentially a trove of techniques and malware attributable to other actors – one would assume russia, china, and probably iran are on the list. it’s meant to conceal or muddy anything that could point to an operation being perpetrated by the US, by burying it among evidence that points elsewhere. i’ve already seen this cited by partisans to throw suspicion on the attribution of the dnc hack, but i don’t find that very convincing. i should write a separate post about my thoughts on that entire topic, but for now this link covers most of what i would bring up.

what you see in the files is – if you’re like me – some really thrilling scraps of highly sophisticated state-actor methods, vectors, and practices. smart tv malware that keeps the tv in a false ‘off’ mode in order to surreptitiously record and upload audio is probably the sexiest; tentative interest in hacking the computers in newer models of cars is the most frightening (one is reminded of the questionable circumstances surrounding the death of a particular journalist).

what you won’t see in the dump is information about programs targeting terrorists. this isn’t a result of redaction on wikileaks’ part, as far as i can tell – and it points to a fact that’s little discussed when US IC spy programs come up: the bulk of them are focused on diplomatic spying.

Among the list of possible targets of the collection are ‘Asset’, ‘Liason Asset’, ‘System Administrator’, ‘Foreign Information Operations’, ‘Foreign Intelligence Agencies’ and ‘Foreign Government Entities’. Notably absent is any reference to extremists or transnational criminals.

to make my position clear, i believe spying programs should be restricted by outright burdensome regulation if there’s even the possibility of incidental collection from americans. i don’t trust the good faith of the IC, and frankly i think anyone who does is a sucker or a fed. i’m an earnest fan of snowden and a paranoiac with regards to personal infosec practices. but i almost never see it emphasized that diplomatic spying is a necessary function of any government, even as i see calls for the abolition of the entire apparatus.

there are two primary camps you’ll find in online discussions of the IC, apologists and critics. critics are the loudest, and i count myself among them, but their motivations push them to emphasize the dragnet nature of the publicly known programs (almost always nsa sigint, since that’s what we’ve known about up till now). apologists, on the other hand, will almost universally default to the necessity of preventing terrorism. what these documents, and the above excerpt, illustrate is that spying on foreign governments is still bread and butter, at least for cia cyber ops. if i were to offer a word of advice to apologists, it would be to emphasize that this is the way the game works – all governments spy on each other, and one is at an unspeakable disadvantage if one doesn’t play the game. many (yours truly included) are poisonously cynical about the terrorism justification.

it’s going to take a while to go through these, but i’m one of the people who manually pores over every classified document that gets released when these things happen. everyone has hobbies 😛

as an aside, i’ve been running a tumblr blog for the last few years with interesting or ‘cute’ excerpts from snowden documents and other sources related to IC classified programs. if you’re interested in that kind of thing, you might check it out.


foss software

Published:

i switched to linux about 2.5 years ago, after using windows my entire life, starting with 3.1 on a desktop in my room that i got in 1997. i have to say i am absolutely delighted with the state of desktop linux. i tried using linux on my computers a few times over the years without having it stick – first with suse on a laptop in 2002 using cd’s i picked up at a used bookstore, then with ubuntu in about 2010. both times i got fed up with minor annoyances that required much more work to fix than i was used to, inside an unfamiliar paradigm that frustrated me.

what changed this time? well, i put openwrt on a router, and had to use vi to edit the configuration. i finally sat down to learn basic cli functions beyond simply copying commands verbatim from tutorials, and found that it had a relatively straightforward logic to it. i wanted to try unix-like operating systems again and see if i could hack it. this time around, my computers worked flawlessly out of the box, and i didn’t have to fix any fundamental issues. i did one online tutorial in particular – learn the command line, by codecademy – which left me feeling genuinely excited about understanding how computers worked for the first time since i was a kid.

i’ve since switched to linux on every pc i own, and i’ve become one of ‘those guys’ when i talk about computers with friends now. but i can’t help it, i think this stuff is genuinely really fun.

currently i use xubuntu (ubuntu with xfce) on my desktop, an asus vivopc – nothing very special, but a cute little box – and galliumos, an xubuntu-derived distro meant for chromebooks, on my toshiba chromebook 2.

galliumos is a real treasure – it’s very lightweight, very fast, and all of my hardware works perfectly. the toshiba itself was chosen almost arbitrarily, just balancing reviews against price point, but it turned out to be a really good choice on one particular count: it comes with a 16gb ssd, which i swapped out for a 128gb one. the tiny stock disk was the machine’s main constraint, but with the 128 i don’t have to worry about a crowded disk. even with full disk encryption (via luks, which is selected during install), it goes from boot to logged in in under ten seconds.

i intend to dive in a little deeper at some point, and will probably try installing arch on a machine once i’m comfortable restoring from backups, but for now i’m very pleased with my software configurations and workflows.

anyway, the reason i wanted to make this particular post was to share some of the software i’ve found that i’m excited about – the stuff that justifies switching to linux, in my opinion. so here are some of my favorite examples:

  1. sshuttle
     • this was the first piece of linux software that got me seriously excited: if you have a user account on a remote server you can ssh into, you can tunnel your whole connection through it – a poor man’s vpn. vpn’s aren’t expensive to begin with, but i pay $15/year for a vps with ramnode. it’s also much simpler than configuring openvpn or similar on your server, and requires much less overhead (somewhat necessary when your vps has 128mb of memory!). one small issue: on every computer i’ve tried running the sshuttle client on, the first attempt after a boot gives an error message and drops the connection immediately. not to worry though, because it works on the second attempt with no hitches. if you have rsa key auth set up on ssh, you don’t even need to enter a password. simply alias sudo sshuttle -r user@server 0/0 to ‘proxy’ or something. voila! you have transparently forwarded your connection. a friend uses it with a vps in the uk to watch geolocked content.
  2. vimwiki
     • a personal wiki inside of vim, which uses markdown and autoindexes entries. i use this to take notes for school, and keep my working directory in a folder in spideroak to sync between my computers. it also has an excellent export-to-html function. mostly, i’m using this to work on my vim abilities.
  3. atom
     • cross-platform, extensible, and beautiful text editor developed by github. i use this for editing the markdown files for this blog, primarily, though i intend to use it if/when i force myself to learn some programming. it’s a real pleasure to use, especially with a dark theme.
  4. borg/borgmatic
     • cli backup software, which deduplicates and supports encryption. back up to anywhere you can ssh into, and encrypt it if you don’t trust whoever controls it. borgmatic is a script to automate the backups and enforce rules – eg pruning old backups. (see the sketch after this list.)
  5. pass
     • cli keepass/lastpass alternative. saves each entry into an individual gpg-encrypted file, making it similarly portable (and easy to combine with borg). to be honest, i don’t use this because i’m stuck on lastpass’s convenience, but i’m glad it exists. if i (or you) ever decide to switch, there is fortunately a script to export lastpass-to-pass. if lastpass ever has a security breach that shakes my faith in them, this is what i intend to fall back onto.
  6. redshift
     • this is simply an open source clone of f.lux – if you’re unfamiliar with it, it shifts the light temperature of your monitor towards a warm/orange tint as the sun goes down. the idea is that blue light disrupts your internal clock and can hurt your sleep. instead, your monitor has a warm cfl-like glow as the night goes on. i find it much easier to look at, in any case.
  7. fish
     • shell with syntax highlighting, autosuggestions for commands, tab completion for flags, completions auto-generated from manpages(!), and generally a much friendlier feel than bash. i’ve just switched to this as my default shell and i’m very pleased with it. “finally, a shell for the 90s”.
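as promised in the borg/borgmatic item above, here is roughly what bare borg usage looks like. this is a sketch: the repo location and the paths to back up are placeholders to swap for your own, and borgmatic wraps the same steps in a config file you can run from cron.

# create an encrypted repo on any box you can ssh into (one time)
borg init --encryption=repokey user@server:/path/to/backups

# take a deduplicated, encrypted backup of your home directory
borg create user@server:/path/to/backups::'{hostname}-{now}' ~/

# prune old archives, keeping a week of dailies and a month of weeklies
borg prune --keep-daily=7 --keep-weekly=4 user@server:/path/to/backups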