
Machinations of Sanity


Gerrymandering Math [30 Jan 2017|02:28pm]
I ran across this summer school recently and read this white paper, and while I'm happy this sort of work was getting done I was struck by how shallow the analysis there seemed to be.

So I started thinking about what sorts of simple improvements could be made, that could still be computed automatically rather than needing to be hand-drawn or specified.

The first point that they mention several times is that boundaries of cities, coastlines, and state borders can cause massive increases in perimeter without making a district any more suspect as a gerrymander. This seems like an easily solvable problem, and I'll address a couple of methods for tackling it here.

The first is to consider districts as partitions of the convex hull of the region in question, rather than partitions only of the region itself. For each district in the region, include with it any point within the convex hull that is closer to that district than to any other district. This sounds a little complicated, so here's an ugly picture I drew in MS Paint:

By any methodology used in the study (for example, the Reock measure, which compares the district's area to that of the smallest circle containing the district; or Schwartzberg, which compares the district's perimeter to the circumference of a circle of equal area) this district will look like a hugely gerrymandered mess.
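The Schwartzberg-style ratio is easy to compute from coordinates alone. Here's a pure-Python sketch using the shoelace formula, with toy shapes I made up (a real district would be a long list of lat/long vertices):

```python
import math

def area_perimeter(poly):
    """Shoelace area and perimeter of a simple polygon given as (x, y) vertices."""
    a = 0.0
    p = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        a += x1 * y2 - x2 * y1          # shoelace cross terms
        p += math.hypot(x2 - x1, y2 - y1)
    return abs(a) / 2.0, p

def schwartzberg(poly):
    """Perimeter divided by the circumference of the equal-area circle.
    1.0 is a perfect circle; bigger = less compact."""
    a, p = area_perimeter(poly)
    return p / (2.0 * math.sqrt(math.pi * a))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]  # same area, long and thin
print(schwartzberg(square))  # ~1.13
print(schwartzberg(sliver))  # ~5.7
```

The skinny rectangle scores five times worse than the square despite having the same area, which is exactly the kind of signal these measures pick up--and exactly why a jagged coastline inflates the score for no good reason.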

However, when extended to being a partition of the convex hull of the state, more than half of the district's perimeter disappears while its area substantially increases.
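A crude way to compute this extension automatically: rasterize the map into a grid, then assign each cell that's inside the hull but outside any district to the district owning the nearest labeled cell. A toy sketch (the grid and labels are made up; a real version would work on polygons):

```python
def extend_to_hull(grid):
    """grid: 2D list of district labels; None = inside the convex hull
    but outside the region. Each None cell gets the label of the
    nearest labeled cell (squared Euclidean distance on cell centers)."""
    labeled = [(r, c, lab)
               for r, row in enumerate(grid)
               for c, lab in enumerate(row) if lab is not None]
    out = [row[:] for row in grid]
    for r, row in enumerate(grid):
        for c, lab in enumerate(row):
            if lab is None:
                _, _, nearest = min(
                    labeled,
                    key=lambda t: (t[0] - r) ** 2 + (t[1] - c) ** 2)
                out[r][c] = nearest
    return out

grid = [
    ['A', 'A', None],
    ['A', 'B', 'B'],
    [None, 'B', 'B'],
]
print(extend_to_hull(grid))
```

Once the fill is done, you just rerun whatever compactness measure you like on the extended districts.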

Here's a second idea: draw a straight line through any district border that is defined by water or the border of the region being districted (perhaps use the convex hull here as well) and use this for calculating the perimeter.
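A sketch of the straight-line idea, assuming you already know which run of boundary vertices is the coastline (the polygon and indices below are invented):

```python
import math

def perimeter(poly):
    """Total edge length of a closed polygon given as (x, y) vertices."""
    return sum(math.hypot(poly[(i + 1) % len(poly)][0] - x,
                          poly[(i + 1) % len(poly)][1] - y)
               for i, (x, y) in enumerate(poly))

def straighten(poly, start, end):
    """Drop the vertices strictly between indices start and end (a jagged
    coastline run), so that stretch of boundary becomes a straight chord."""
    return poly[:start + 1] + poly[end:]

# A "district" with a jagged bottom edge (vertices 0..4 are the coastline)
poly = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0), (4, 4), (0, 4)]
print(perimeter(poly))                    # ~17.66 with the jagged edge
print(perimeter(straighten(poly, 0, 4)))  # 16.0 with the chord
```

The hard part in practice isn't this computation, it's labeling which boundary segments are water or state borders in the first place--that has to come from GIS data.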

One method proposed in the white paper is to compare the compactness of the district to the overall compactness of the region. However this is a bad fit for a district like Florida-14 above: this is a small piece of noncompact land within a larger, more generally compact state. Consider also the northwest end of Oklahoma, the southeast bayous of Louisiana, or Manhattan island in New York.

My next thought is that gerrymandering essentially implies a purposeful bias in selecting district borders. This purpose could be detected, for example by looking at semi-local demographic data. It's not clear to me how easily available this data is but it must be for the districting process to be possible in the first place.

Here are a couple of thoughts on detecting whether the district seems to have unusual demographics relative to the surrounding area:

Compare the population (say, mix of ethnicities, or major party vote share) of the district to that of its convex hull. This is much more difficult with cities, since a rapid change in demographics is likely when entering an urban area, and it might be appropriate to draw district lines along those demographic boundaries.
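For the comparison itself, total variation distance between the two demographic share vectors is a natural first cut (the shares below are invented for illustration):

```python
def tv_distance(p, q):
    """Total variation distance between two demographic share dicts
    (e.g. party vote shares). 0 = identical mixes, 1 = fully disjoint."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

district = {'D': 0.605, 'R': 0.395}  # hypothetical strongly-D district
surround = {'D': 0.43,  'R': 0.57}   # hypothetical surrounding window
print(tv_distance(district, surround))  # 0.175
```

A high score wouldn't prove gerrymandering on its own, but combined with a bad compactness score it's the kind of flag you'd want an automated audit to raise.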

So a second pass at the idea: consider a square region the same size as the district, centered on the district's center, with its diagonals extending in the cardinal directions:

This district has a CPVI of D+21, but the district to the east (6th) is R+9, to the north and west (3rd) is R+14, surrounding the northern tip is the 4th district at R+19, and to the southwest are districts 10 and 11, at R+6 and R+11 respectively. The combination of ridiculous non-compactness with vastly different demographics than surrounding districts suggests that this is clearly gerrymandered for political purposes.

Also consider the similarly non-compact 22nd district, which sits at D+3 and is surrounded by districts 20 (D+29), 21 (D+10), and 23 (D+9)--a competitive district surrounded by non-competitive districts. Honestly it kind of looks like district 20 is sinking its fangs into the territory to drain local Democratic support by 7 points.
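This neighbor comparison is trivial to mechanize: encode D+n as +n and R+n as -n, and measure how far a district's lean sits from the mean of its neighbors'. Using the FL-5 numbers above:

```python
def cpvi_gap(district, neighbors):
    """Signed CPVI lean (D positive, R negative): distance between a
    district's lean and the mean lean of its neighboring districts."""
    return district - sum(neighbors) / len(neighbors)

# FL-5 (D+21) against the surrounding districts: R+9, R+14, R+19, R+6, R+11
print(cpvi_gap(+21, [-9, -14, -19, -6, -11]))  # 32.8
```

A gap of nearly 33 points against every neighbor is the quantitative version of "sinking its fangs in"; a properly weighted version would use neighbor populations or shared-border lengths rather than a plain mean.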

Contrast with California's 22nd district, which has a CPVI of R+10, but sits in between some very diverse and reasonably drawn districts: CA-4 (R+10), CA-16 (D+7), CA-21 (D+5), and CA-22 (R+15). This district has a population that seems contiguous with the surrounding area, even with its weird outline (some of which is also drawn from the mountain range.) You can also see this in the districts of LA, which have a voting index that seems to correspond pretty strongly to whether they're a central urban area.

An alternative to this "semi-local" information would be to look at something like global information. If one comes into districting with the explicit goal of making as many competitive districts as possible, then something like FL-22 makes sense--though this also would result in a much more volatile political system, where for example the house could change from a supermajority in one direction to a supermajority in the other within one election cycle. Whether you see this as a feature or a bug probably depends on your perspective.

Alright, that's it for me spitballing. While dealing with coastlines and city borders still seems easily solvable, I do think this is a complicated problem and figuring out what the proper reasons to make districting choices are is a necessarily collaborative political process.

Of course we could also just have proportional legislatures, where parties have lists of candidates and seat a number of them corresponding to the share of the vote they received. I like the idea of cutting down on local special interests in general, and I LOVE the notion of the Green Party getting consistent seats in Congress, but there are certainly problems with individual accountability and the escalation of political party machinery.
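For what it's worth, the seat allocation itself is simple; here's a largest-remainder (Hamilton method) sketch with made-up vote totals:

```python
def largest_remainder(votes, seats):
    """Hamilton/largest-remainder allocation: each party gets the floor
    of its proportional quota, then leftover seats go to the parties
    with the largest fractional remainders."""
    total = sum(votes.values())
    quotas = {p: seats * v / total for p, v in votes.items()}
    alloc = {p: int(q) for p, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    for p in sorted(quotas, key=lambda p: quotas[p] - alloc[p],
                    reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

# hypothetical vote shares, 10 seats
print(largest_remainder({'Dem': 46, 'Rep': 44, 'Green': 10}, 10))
```

Note that a 10% Green vote reliably yields a seat here, which is exactly the property I'm after; real systems usually also add a minimum threshold (e.g. 5%) to keep fringe parties out.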

Empathy for the Rural Poor [12 Nov 2016|08:14pm]
There are a few pieces of background for what I'm planning to write here.

First and foremost, there is this post on empathy and the American left. (also this one.)

Second, there's this old interactive post about diversity and incentives.

Third, there was a conversation I had with a friend of mine who came to the silicon valley tech world out of a very poor rural upbringing (near where I grew up and which I thus had some context for but not the same experience of).

Finally, there was a meme that went around Facebook asking people to post how many of their friends had liked the Facebook pages of prominent 2016 presidential candidates.

My results were:
Bernie Sanders - 75
Hillary Clinton - 32
Jill Stein - 14
Donald Trump - 2
Gary Johnson - 2

To put it lightly, this is not representative of the American public.

It's well established in machine learning and behavioral economics that combining different, independent sources of information improves the accuracy of predictions. That is, there's tangible value in diversity outside of some kind of liberal ideology holding up diversity as a core value.
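This is basically the Condorcet jury theorem; a quick sketch of why independent sources help:

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a majority vote of n independent predictors,
    each individually correct with probability p, is correct (n odd)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_accuracy(1, 0.6))   # 0.6  -- one mediocre predictor
print(majority_accuracy(11, 0.6))  # ~0.75 -- eleven independent ones
```

The catch, which is the whole point here, is the independence assumption: eleven friends who all read the same sources are closer to one predictor than eleven.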

So here is a question:

Am I obligated to go to an effort to befriend members of the modern conservative movement?

On the face of it, there are a few fairly compelling arguments here.

  • Since I basically do not know anyone in a community which contains maybe one hundred million people, there are likely interesting ideas among them that I'm missing. In fact, it is likely that there are many factions among modern conservatives, each with different ideas which I am generally not being exposed to.

  • Symmetrically, it is likely that many conservatives know few if any people like me, and could benefit from exposure to my ideas.

  • On a broader level, this sort of cultural interchange is the only way I can imagine resolving the tangled morass that is the current political climate.

Which, upon reflection, is not actually as many reasons as I thought it was going in--and that is probably a large part of why I didn't find this argument compelling.

In any case, there are also reasons NOT to interact with conservatives, and/or holes in the previous reasons. For example:

  • Not all ideologies are created equal. If I am learning math from professors and research papers, I do not need to also learn math from high school teachers. I see some compelling cases that Donald Trump supporters could "grow up" into Bernie Sanders supporters.

  • I do get exposed to some amount of secondhand conservative ideology through things like this or this (not the best example but it's hard to search tumblr).

  • Most conservatives live far away from me, making it EVEN MORE difficult to befriend people who are already very culturally distinct from me.

  • And, most importantly, modern conservative ideology often explicitly calls for violence against me and people that I care about.

Just to reiterate that last point:

I am a bisexual atheist. I have many friends who are gay, or trans, or asexual, or immigrants, not to mention friends that are black or female or of mixed race.

The modern conservative movement, including specifically president-elect Trump and vice president-elect Pence, has called for:

  • Conversion therapy for gay teens, which has a higher suicide rate than success rate

  • Religious tests for citizenship

  • Massive deportation of non-citizens

  • Nationwide stop-and-frisk, which disproportionately targets black people and fails to reduce crime

  • Reduced availability of birth control

  • Violence against peaceful protesters at rallies

So after all of this...

I do want to have empathy for and communicate with people in the conditions of poverty that lead to fear of the Democratic Party and support for Trump.

I do not want to spend time with people who actively support, or even who are merely unconcerned with, committing violence against me and my friends.

I don't know what to do, but I do know that what I am doing right now is not enough, and this is a place to look for more.

What is to be Done [09 Nov 2016|02:13pm]
[ mood | Stress Naps ]

I wrote little blurbs on Facebook about how I feel sort of powerless and don't know what to do about the election results. And I think sitting around feeling depressed all day would be the easy way out but I'd rather not do that so here's my first step brainstorming.

My housemate is in school and he writes essays for classes, and it reminds me of how I never actually cared about or really "got" essays in school. But now I think I understand how the process of writing down a thought process can both be informative for the writer and persuasive to the reader, and how these sorts of deep interactions and especially the research behind them is valuable and necessary for discourse.

So one thing I could do, to really actually do something meaningful and make a difference, is I could start looking at local politics one issue at a time and writing essays on the issues. I could look up related issues in other places, how different solutions have worked out. I could run budget numbers for the state or learn about state bonds and interest. I could go through laws for my local county. I could treat it as though I was in law school or something, and push myself to keep going even when I just want to play Overwatch all day.

I think this would be a valuable way to spend my time, in some sense. I think entering local politics could be interesting and an effective way to make the world around me a bit better.

But there's a deep flaw in this plan, which is that I have no way to hold myself accountable to it. I've had plans like this before and unless I have a serious social commitment to something I will simply stop it because of depression. And sometimes I will stop even IF I have a social commitment to it.

So how can I make this plan easier? How can I change it from something that's clearly out of scope to something that could become a habit?

Well, one obvious idea would be to reduce how often I engage with it and how fast I have to. However this will also reduce my momentum and make it harder to develop a habit.

Another idea would be to reduce the level of expected rigor from my posts or analysis. This doesn't really help either, though, since once I get into a topic I expect to have enough inherent drive to understand it to a reasonable level of rigor.

I could establish a list of ideas to engage with and readily present it to myself, or try to build a group to engage with it with me, but then I'll have substantial startup costs that I may not be able to get through to actually do anything.

I got into a habit for about a day and a half of playing overwatch, and writing for a nanowrimo sort of story while I was waiting for games to load. But then my writing became world building rather than story telling and I slowed down and I stopped.

So maybe my first step should just be to reestablish that habit, and to start writing again. Forget about the details, just write. Write and see what comes of it.


Pokemon Go Sucks [03 Nov 2016|03:26pm]
Pokemon Go is a terrible game that adds basically no value to the idea of a Pokemon mobile game--a brilliant idea that deserves better. This bothers me a lot, because the failures of Pokemon Go seem very obvious and easy to avoid, and in some cases easy to fix. It betrays the fundamental premise of Pokemon, and does so in a way that required designing new mechanics rather than recycling the mechanics already existing in Pokemon games.

Because I keep getting worked up about it, I figured maybe it would help if I just wrote down everything that's wrong with Pokemon Go and how I would fix it if I had the authority. Maybe it won't help anything, but we'll see.

So before I get into the specifics, I want to establish what I think the important basis of a Pokemon game is. I posit that the core Pokemon experience has three elements:

1) Meet a variety of adorable and strange creatures
2) Choose a handful of those creatures to befriend, based on a combination of aesthetic and mechanical reasons
3) Bond with those creatures in a way that turns you both into badasses

Phrasing this in the MDA framework, the core aesthetics are discovery in the first stage, primarily expression with a hint of challenge in the second stage, and abnegation and fellowship in the final stage. So the dynamics of play in a Pokemon game should give you new experiences, the ability to react to them uniquely, consistent rewards, and positive interactions with others.

Hopefully it's clear how the core pokemon games deliver on these aesthetics, but let me also quickly review one of my favorite games of all time and how it fits in: Pokemon Snap.

Pokemon Snap is fundamentally a rail shooter. There aren't any new places to go, and you can't even control your movement! But it's still chock full of secrets. Three of the four items you get allow you to access dozens of new scenarios and interactions, and unlocking these tools feels great because of it.

You also have very little ability to choose what you see... however, you have a very large ability to choose what you look closely at. There's significant variety in terms of how hard or how easy it is to interact with different pokemon, and what the best possible pictures you can take of those pokemon are. Pikachu has a special appearance in every level, whereas there's only one place to get a fantastic picture of charmander. However, it is possible to get great pictures of any pokemon by following it closely. Every time you run the beach you make choices about how much time to spend looking for lapras, whether to use doduo to pause yourself or to take its picture, whether to save pidgeys or save meowths, whether to get a good picture of chansey or eevee. Making those decisions on your own terms gives the game a powerful sense of self expression. And at the end of the day, you develop that relationship with the pokemon with the pictures you take--which make a great trophy not only for yourself but to show off to your friends.

So let's review the experience of Pokemon Go, starting with a few things Pokemon Go really gets right.

You walk around the real world, and encounter loads of different pokemon. The type of pokemon you encounter is based on the environment you are in, giving you an incentive to travel around and explore.

Ok, that's great! Now on to the rest of the game.

Pokestops
What the fuck are these? How do they contribute to the aesthetics of play at all?
They result in the play pattern of people taking the same walking routes and visiting landmarks. This seems fine, though there are mixed results: remember the koffing at the Holocaust memorial?
One of the things that's weird to me is that the item system is such a departure from the mechanics of the original games. You get a time-based supply of items, which must be used for healing. In the main games I almost never use items other than pokeballs--especially not healing items--except when facing the Elite Four.

So how can we change it?
Make pokestops like pokemon centers--they heal up your pokemon, while also serving as convenient meeting places for players. This makes the lack of pokestops in rural areas less of a problem, especially if you change the item system to a store-based system. Maybe you could give players a small number of pokedollars for swiping pokestops. But better yet, you could give them pokedollars for checking in together--giving people another incentive to meet up and play together.

Wild Pokemon
In Pokemon Go you encounter wild pokemon by clicking on them in their location on the map. So far so good I guess--the random encounters from the main games don't work so well here. But it can be difficult to click on them if there's a cluster, or if they're right under a pokestop. This difficulty doesn't really contribute much. Showing up on the map helps the illusion that the pokemon are in the real world... but that's Fantasy, which is actually NOT a core aesthetic! Plus while the augmented reality is fairly cool, the technology just isn't there to get pokemon to be positioned nicely in the picture all the time. Combine that with the weird sizes of different pokemon, the lack of 3D images, and the fact that AR recenters the pokemon in front of you and doesn't let you move closer or further, and the fantasy sort of falls apart.

Once you've encountered a pokemon, it sits in front of you alone as you take pictures or throw pokeballs at it. This is, in fact, even more boring and even less under the player's control than the safari zone--the part of the pokemon games that was so bad that they removed it in later generations because everyone hated it. Not having your pokemon interact with wild pokemon at all is also one of the many places they've eliminated any way of bonding with your pokemon.

I'm not even going to get in to tracking, as it's been argued to death on the internet. But basically, playing marco polo is kind of fun. Knowing that something is within two blocks of you is kind of frustrating. The new tracking system which seems to exist exclusively in San Francisco is actually almost as bad--going exactly where the game tells you to isn't fun either.

So how can we change it?
Have your buddy pokemon encounter wild pokemon. They can even just look at each other and do their little existing animations. So adorable! Then get some catch chance modifiers based on your pokemon's attributes. You don't have to fight them, just a bonus if your buddy is the same type or has type advantage, and a bonus if they're in the same egg group, and a bonus if your buddy came from this area. If you add in novelty pokeballs (fear balls for pokemon you have type advantage against, net balls for water and bug types, dusk balls for catching at night, etc.) then you suddenly have a great interaction with wild pokemon that also gives you an avenue of expression through your buddy.
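To make the bonus idea concrete, here's a toy multiplicative version (all the bonus values and data fields are invented):

```python
def catch_multiplier(buddy, wild):
    """Toy multiplicative catch bonus from the buddy idea above.
    Stacks small bonuses for type overlap, shared egg group, and the
    buddy having been caught in the same area."""
    m = 1.0
    if set(buddy['types']) & set(wild['types']):
        m *= 1.2   # shared type
    if buddy['egg_group'] == wild['egg_group']:
        m *= 1.1   # same egg group
    if buddy['origin'] == wild['area']:
        m *= 1.15  # buddy originally caught in this area
    return m

buddy = {'types': ['water'], 'egg_group': 'water1', 'origin': 'beach'}
wild = {'types': ['water', 'flying'], 'egg_group': 'water1', 'area': 'beach'}
print(catch_multiplier(buddy, wild))  # 1.2 * 1.1 * 1.15 ~= 1.52
```

The exact numbers don't matter; the point is that every bonus is a reason to think about which buddy you bring where, which is expression the current game throws away.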

Also, for Arceus' sake, stop pokemon from running away and give them the same capture rates as the main series games. Why would you change those things? It's more work and no benefit. Having a pokemon run away from you NEVER contributes to the player experience! In the games it's not AS bad since you can stay in the same area and be very likely to see another of the same species (for example Abra) and it's different for legendary pokemon (the dogs in G/S and lati@s in ORAS) but in Pokemon Go it's either insulting (a pidgey ran away? Really?) or horrifying (I've seen two wild porygon ever. How do you think I'd feel if they ran away and I couldn't catch them and never saw one again?)

While we're at it, let's also stop pokemon in, say, the middle third of the CP spectrum from ever spawning. The way the game is now, seeing low-level pokemon is good because they're easier to catch, and seeing high-level pokemon can be good (though not for rattata...) because you may be able to use them in combat. But that middle third just sucks. I'd cut more if it were me: I'd remove the level-based component of catch rates and cut the low-level pokes too.

Your Pokemon
Once you catch a pokemon, there's so much you can do with it!

In the main games, you could level pokemon up, evolve them in a variety of different ways that take time to explore and discover, teach them new moves, fight with them, and even pet them in pokemon-amie!

In Pokemon Go, you can theoretically level pokemon up. However this makes them harder to evolve (?!?!???!) and is almost always a waste of resources. All pokemon evolve in the same way, giving you no choice over branching-path evolutions and ignoring branching paths and evolutions from outside gen 1. It is impossible to teach your pokemon new moves. Pokemon-amie, which would not only deliver bonding experiences but also double down on the Fantasy element that Pokemon Go has pivoted toward, would be a great addition.

Finally, it's theoretically possible to fight with your pokemon... but pokemon are listed in terms of power level, not in terms of stats. In fact, you don't know ANY of the stats of your pokemon except its power level. So which pokemon you fight with is pretty much a done deal. I'll talk a bit more about this when I get to gyms--but basically if you like a pokemon, that means nothing at all for your ability to use it.

So how can we change it?
First, just put the stats and level of a pokemon on its page. Why would you not do that? It clearly HAS stats--the reviews we get from the team leaders tell us that now. This CP crap is an unnecessary new mechanic.

Second, give your pokemon experience instead of your player! Jeez. Okay this is hard with the way the game currently is, but seriously.

Third, take off the player-level based level cap. This at least gives players the ability to, if they want, power up a pokemon that they really like at the cost of this being worse for their experience gains. This also would allow new players to level up a favorite pokemon enough to help attack and defend gyms--which is currently not possible at all.

Fourth, when powering up a pokemon with candy, have that candy reduce the cost of evolving that pokemon. That would allow players to bond with SPECIFIC pokemon. You could even do this ONLY for pokemon that level up through evolving, which would be a cool way to distinguish between pokemon!

Okay I got bored
So a couple other notes and I'll leave this to continue later if I feel like it.

-Don't give super powerful pokemon like dragonite and snorlax moves that cover their weaknesses like bullet punch, zen headbutt, and earthquake
-Make battles WAY faster, for example by restoring base power of moves from the normal game. Fights are boring.
-Add status effects, or something like baton pass as a special move that switches to another pokemon and gives them a full special bar, so that battles are less boring.
-Make swarms public, add a notice of where a swarm might occur or where a frequent spawn point is in the pokedex for pokemon that have been seen. Or just announce them on your website; like every day have a load of spawns of a randomly selected pokemon and a "find a park near you" button on the site.
-Make different pokemon more common around members of different teams.
-Make it possible to toss eggs so that it's easier to get 10k eggs.

Thoughts on the Curse of Chalion [05 Sep 2016|12:06am]
I'm going to be starting to contribute to a group blog project that kind of came out of nowhere, which will be starting with our discussion of The Curse of Chalion by Lois McMaster Bujold as we read it. I'm currently through chapter three, here's what I've thought so far!
The setup for the novel seems pretty traditional. The author is doing a good job showing rather than telling, building our idea of the setting with scenes from daily life rather than via infodumps. That's nice and all... but the setting is exactly the same as every other setting. We're introduced to an order of knights that is aloof from peasants but not really evil, a character with a troubled past and a wide variety of skills that will make them capable of figuring out what to do in every situation that comes up, and an ominous form of death magic with great power and great cost. Through the next couple chapters we see the naive and forceful young woman, the matriarch balancing her power within her domain against a patriarchal system, and a little bit of depth and a softer side to the god of death. These tropes are a bit more modern than the rest we've seen of the setting but they're still played very straight and common as anything.
All of this is well established, but it's also fairly typical. I have yet to really distinguish this book from the countless others I've read. By the end of the third chapter we have some characters established with their relationships intact and a view of the world. As a first fantasy novel this would be a lovely setup. But as a 50th or 500th fantasy novel one could get the same amount of exposition in with five pages, letting the reader's memories of other books fill in the rest of the details.
It's probably better not to, but this is the part of the book that I'd sort of fly through on a reading that I wasn't reporting on, and think not much of until much later on.
I'm not adding any tags to this post under the assumption that in the future my writing on the topic will be curated at

Emotional Labor [03 Aug 2016|09:49pm]
Anna put up a post about emotional labor earlier today and reading it I had a response which was somewhat frustrated--on the one hand, the notion that emotional labor is real labor is SUPER OBVIOUS and the idea that it's distributed in gendered ways is also super obvious and a big problem. I want that labor to be acknowledged and reciprocated. However in a sense, I also want that labor to be economized. I'm not that good at emotional labor. I try to engage in it, and I can do things like pay and manage my bills, do laundry, cook, call my parents on their birthdays, remember my roommates' allergies, etc., but there are huge swaths that I struggle with. In particular something that bothers me (not just in this instance but largely in related instances) is that emotional labor usually isn't well documented. If I want to know how to take care of my life, there isn't a big master list of everything I need to take care of. This sort of thing especially bothers me in legal situations, since there are obviously big consequences but I have no idea where I would even find an ACTUALLY COMPLETE list of to-dos when starting a business, for example. But it primarily affects me in emotional labor contexts, where there are things like "keeping a well-stocked pantry" that I don't really understand how to understand, or setting up a well-furnished room (the importance of having trash cans in bathrooms, for example, wasn't something that really hit me until my latest place).

All that said, there's still a strong element of the obligatory nature of emotional labor that bothers me. I don't want Anna to decide what's for dinner, make dinner, and clean up afterward without me contributing. That's fucking terrible and a huge cost and she shouldn't have to bear that! But I also don't want for her to decide what's for dinner, start making it, then have the expectation that I will contribute in a certain way at a certain level, and to still feel that because I wasn't voluntarily involved in the process but brought in by her that I'm not contributing. In that situation, I feel like I'm being coerced into giving up my emotional labor in the same toxic process that she's caught up in all the time. One way out of this is to order take out or to go out more often, which actually I'm very happy with. When I was eating at work and getting 10+ catered meals per week I was ecstatic about the situation. And obviously there's emotional labor going on there, but the people doing it are getting paid for it and it's more efficient since there's less face-to-face interactions of pleasantness and cooking economies of scale. This doesn't interact well with the fact that service workers are grossly underpaid but that's one of the reasons that I support higher minimum wages (or much better yet basic income).

There's also a thread of debate regarding how necessary emotional labor is. The standard line on one side is that all of it is absolutely necessary and on the other side that all of it is optional. But these are both obviously false. Things like ordering out or hiring a cleaning service, or letting many smaller relationships go by the wayside, are all real possibilities. And depending on specific circumstances some points are more or less important. And people have very different preferences--for example, I have no issue living out of a suitcase in terms of clothing organization for months on end, whereas Anna unpacks carefully even for weekend-long hotel stays. And how often and to what level different areas of the house should be cleaned is very much a matter of preference. I'd love to see good tools for aggregating different people's preferences and distributing emotional labor more equitably, but this isn't really the core of the point I want to make here.

The point is that while I often feel that much unnecessary emotional labor has been pushed forward and that actually dropping these tasks is a part of a reasonable path to reform in terms of how we organize our society, there is a lot of emotional labor that can't be dropped.

I'm probably somewhat unusual in how little I care about my family. I'm not very close to them, despite the fact that I like my parents a lot, and I don't put that much effort into keeping in touch with them (though I do call them and interact with them mostly on my own; plus Anna has a weekly phone alarm to tell me to call my dad. I should move that to my phone actually.) And for the most part I take care of myself (see: paying bills and cooking and doing laundry myself above) and I trust them to take care of themselves. But... that's not entirely true. I definitely need more from them than I'm getting right now. And because I've been lax about putting in the effort to stay connected, I often don't feel close enough to ask them for the sort of help that I need navigating the bureaucracies of life.

And more important, there are a couple of people who could easily get left out of my life entirely if I let them. My dad and my grandmother in particular, are not great about staying forcefully in contact with me and I know they care a lot about hearing from me and miss me when I don't talk to them for a while. Because I'm an adult, I set up a repeating google calendar event to call my grandmother once a month. Hopefully that will help. It's frustrating, because it still feels like a cost I'm bearing which I have no option to duck out from--but I guess the answer is that I do have that option. It's just a shitty option. Because not being part of the emotional economy means the emotional world falling apart. And if I'm being honest, my emotional world HAS been falling apart.

This is frustrating for me because I know that I'm actually pretty good at emotional labor, given that I'm a dude. I do actually hit my friends up out of the blue. And then they don't respond. And it kind of kills me. And I just don't know what to do about it.

Having been out of work for a couple of weeks, I legitimately feel like I'm dying. Like my heart just feels like it's falling sometimes, like in falling dreams. It's terrible not seeing people.

This has turned much more rambly and less purposeful than it started, but I feel like it's been productive anyway. So I'm posting.

Humane Machine Intelligence [28 Jun 2016|03:50pm]
There has been a lot of discussion of artificial intelligence in the media recently. Stephen Hawking, Elon Musk, and Bill Gates have all made statements about the possible dangers of superintelligent AI in the future. Others, such as Jerry Kaplan and Robin Hanson, feel that much of the concern is overblown or misplaced. But even they generally see AI as coming with a large number of serious problems that need to be addressed, and even Nick Bostrom, whose book is among the primary writings on the subject, agrees that mental pictures of "Terminator scenarios" are unproductive.

The current state of machine intelligence is well below the level of general intelligence that is popularly associated with the idea of AI. But the consequences of AI are already huge. Some are personal, such as Facebook broadcasting information about the recently deceased. Some are insensitive, such as Microsoft's ill-fated Tay or Google Photos tagging black people as gorillas. Others are world-shaking, like the 36-minute, trillion-dollar "Flash Crash" in 2010 or the use of autonomous weapons by NATO.

The common thread through these incidents is that machine intelligence is exploitable, complex, and unpredictable--and that research in machine intelligence generally does not draw on previous ethical traditions, perhaps because of its position in industry rather than academia.

It's easy to be intimidated by the prospect of machine learning, and adding highly impactful and complicated ethical considerations certainly doesn't make things seem any easier.

Fortunately, many of the existing problems are engineering problems. The economy didn't crash in 2010, and Google's self-driving car continues to be relatively safe despite occasional incidents. Designing learning systems that behave safely and promote the common good is possible, and there are practical steps one can take to keep automated systems in line.

Much of the process of humanizing digital systems is contextual, but here are a few possible starting points for considering the context and impact of machine learning systems:

  • Accommodate complex data that may not meet your expectations.

  • Explore incorrectly classified data points, and where possible preserve fairness across data classes.

  • Ask how data is gathered, for what purpose, and whether that process is appropriate to your context.

  • Consider different levels of autonomous operation for systems--when and how do humans intervene?

  • Review previous similar projects to see where they have issues and what best practices they've put forth.

  • Differentiate between metrics and goals. A self-driving car may use similarity to a human driver as a metric, but its goal is to avoid accidents.

  • Make conscious choices regarding the tradeoffs between performance and interpretability.

  • Define properties of your dataset so that it is clear when a learner will or will not be reliable.

This may seem like an intimidating list, and it is far from complete. But every step in this process, from data exploration to evaluating a solution's robustness, is already a necessary part of machine learning. There is no separation between bridge design experts and bridge safety experts--building a reliable system that fulfills its goal is an integral part of building any system in the first place.
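Several of these items can be checked mechanically as part of ordinary evaluation. As a toy illustration of the fairness point--comparing error rates across classes of data--here's a sketch in Python (the data, group labels, function names, and any "large gap" threshold are all made up for illustration):

```python
from collections import defaultdict

def per_group_error_rates(y_true, y_pred, groups):
    """Compute the misclassification rate separately for each group label."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def fairness_gap(rates):
    """Largest difference in error rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy example: a classifier that errs far more often on group "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "b", "b", "b", "a", "b"]

rates = per_group_error_rates(y_true, y_pred, groups)
print(rates)                # error rate per group
print(fairness_gap(rates))  # a large gap here is a flag for human review
```

A check like this doesn't settle whether a system is fair, but it surfaces the misclassified points worth exploring.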

Writing out thoughts on Israel and Palestine [27 Jun 2016|06:51pm]
I read a post about Zionism on Tumblr last night and it made me want to organize my thoughts on Israel, since I haven't seriously revisited them in a while after having received new structural information.

Israel is a bit of an info-hazard, I think, and I don't really have the time or inclination to actually double-check all the factual points that I'll be using. I'll try my best to tag them all with [citation needed], and I hope my ending perspective will be pretty robust to challenges to any substantial minority of the factual claims. I'm not certain my ending perspective will even be that similar to my current perspective--which is why I want to write this.

Let's start in the 1930s, which is not by any means the beginning of the situation, but whatever. At this point I believe that Palestine is sort of a British colony [citation needed] which was pretty heavily populated [citation needed], but the Brits didn't care about and/or weren't aware of the native Arabs [citation needed]. It is unclear to me what the provenance is of the idea that this area was related to any scriptural ideas of Zion [citation needed] or how true those ideas are [citation needed], but I also don't care much because I'm an internationalist atheist.

Around this time there was a lot of antisemitism going around [citation needed but I'm confident on this claim]. Partly in Germany, but also everywhere else. With the rise of the Nazis there were large movements of Jewish people from various places in Europe [citation needed]. Many went to the US or more liberal democracies, maybe UK and France? [citation needed]. Since the US and UK also housed a lot of anti-semitic sentiment, there was more widespread support for the idea of Zionism [citation needed] including among Germans and Nazis [citation needed] though Zionism was a bit of a fringe view among European Jews [citation needed]. As the whole Holocaust thing got going, Jewish people who didn't feel like they had to leave Europe started either getting killed or starting to feel like they needed to leave Europe [citation needed]. Britain, thinking "what if we got rid of this Palestine place we don't care about," volunteered it as a place to send Jewish people since the UK didn't want to take them [citation needed].

So a bunch of Jewish people who are like "well I don't want to be horribly murdered by Nazis" head to the newly formed Israel, displacing some Palestinians. This gets ratified as a move by the UN, with the native Palestinians having very little say in things, and they start feeling a bit miffed. [citation needed]

Now lots of other countries with strong anti-semitic sentiments are like, "hey, we can send our Jews away!" [citation needed] and more immigration to Israel happens, especially from places like western Asia, Ethiopia, Russia, and the US [citation needed]. This results in Israel expanding, especially because of settlements outside Israeli borders [citation needed]. At this point, Israel is pretty heavily supported by the US [citation needed] as a military outpost to counter the presence of the USSR in the region [citation needed]. This made it substantially more powerful than Palestine [citation needed], which caused Israeli settlers to be even more bold [citation needed].

At this point we're at the 1960's, a second treaty is signed with much wider borders for Israel and much smaller ones for Palestine [citation needed]. First generation native Israeli citizens are coming of age in a very diverse environment, with politics heavily fueled by interactions with other states, especially the US [citation needed]. The US grants nearly unlimited military power to Israel [citation needed], which many Jewish Americans feel bold about [citation needed]. Israeli citizens who came from the rest of west Asia or Ethiopia might feel defensively militant because of recent treatment [citation needed]. Nearby countries such as Egypt, Jordan, Iraq, and Saudi Arabia begin to resent Israel for a variety of reasons including anti-semitism, dislike of US and European interference, relations with the USSR, and religious/theocratic/totalitarian internal politics [citation needed].

Palestinians are justifiably upset about their land not being respected by Israel, but are powerless to respond on the global political scene, leading to radicalization and acts of terror [citation needed]. Israelis respond pretty callously, continuing to expand and placing ridiculous limits on humanitarian aid (John Kerry once saw a truck with pasta not being allowed in because rice was the only staple food allowed) [citation needed]. Israel relies somewhat on the narrative of Palestinians as terrorists and somewhat on the reality of Palestinians as extremely radicalized, and commits war crimes [citation needed]. Israel is also largely now populated by first- and second-generation Israeli natives, many of whom came from countries which no longer exist or are not friendly to Israel [citation needed], which makes other Arab countries' claims that Israel has no right to exist, and their demands that Israelis "go home," nonsensical. Palestinians, having no real infrastructure or political recourse, and being of younger and younger average age (the median Palestinian today is 18 years old; literally a country of 50% children [citation needed]), respond by becoming even more radical, often without any real understanding of the complexity of the situation [citation needed]. Internet arguments break out about how even though Israel is being pretty terrible to Palestine, if Palestinians were in power they would be even worse to Israelis [citation needed].

This brings us to the present day.

Israel is a mostly pretty reasonable country, with super weird politics and very unfriendly relationships with nearby neighbors.
It survives by having pretty much unconditional backing from the USA.
It also continually fails to enforce various UN treaties which have been increasingly favorable to Israel and increasingly unfavorable to Palestine.

Palestine is filled with violently radicalized children.
Trying to integrate a large number of radicalized people can lead to a bunch of problems. Which means that Israel basically just... doesn't. This maintains the vicious cycle.

My impression of BDS campaigns is that given that Israel is basically a country because of western and especially US financial backing [citation needed] we should maybe make that funding contingent on acting more nicely toward Palestinians. I'm a bit sympathetic to this, honestly.

It seems clear that powerful Israeli politicians like Netanyahu are completely unconcerned with Palestinian welfare [citation needed]. They could be doing a better job making the world a better place, and if we have some power to incentivize them to do this, we should.

That doesn't make the problem "easier" per se; what a resolution to the situation would look like is quite unclear. But a part of it is Israel actually enforcing its own borders on its citizens, so that they don't gradually encroach on Palestinian land, and continue aggravating the situation.

Computability is Nonsense? [21 Jun 2016|11:44am]
Start with a Gödel numbering of Turing machines whose outputs are real numbers and which can be described with a finite amount of information. In an ideal world, build a Turing-complete programming language, and take a Gödel numbering of programs which take as input a natural number n and return as output the first n digits of a real number (let's say between 0 and 1 for simplicity).

Define the set of computable numbers as the set of real numbers which are approximated by this process. There are clearly only countably many computable numbers, since each can be described by a computer program of finite length, which can be collapsed into binary as a single number; so the computable numbers are in correspondence with a subset of the natural numbers.
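The "collapse a program into a number" step can be made concrete: program text is a byte string, and a byte string is just a large integer. A quick sketch (this particular encoding is one arbitrary choice among many):

```python
def godel_number(source: str) -> int:
    """Encode a program's source text as a single natural number."""
    data = source.encode("utf-8")
    # Prefix a 0x01 byte so leading zero bytes aren't lost in the integer.
    return int.from_bytes(b"\x01" + data, "big")

def decode(n: int) -> str:
    """Recover the source text from its Gödel number."""
    data = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return data[1:].decode("utf-8")

prog = "lambda n: '3' * n   # the constant 0.333... machine"
assert decode(godel_number(prog)) == prog  # the encoding is invertible
```

Since the map is invertible, distinct programs get distinct numbers, which is exactly the injection into the naturals that countability requires.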

So then, write out the decimal expansions of these numbers in a block, and build Cantor's diagonal element, which at each decimal place has a value not equal to the corresponding computable number's value at that place. This diagonal element is necessarily not in the list. However, I have just described an algorithm for computing the first n digits of its decimal expansion for any n, and the programming language we used is assumed to be Turing complete--so the diagonal element is computable. This is a contradiction.


So what went wrong?

  • Is my definition of computable bad? I don't think that's really possible, since I'm just choosing a subset of the real numbers, which fail this test because they are uncountable.

  • Is it unclear what is meant by computable? Well I'm not actually asking about set membership; the proof is (not very effectively) constructive--given a real world system it would be possible to put computational bounds on everything and build this system. What would happen?

  • Is the set of computable numbers as I've defined it not actually countable? I.e., is there no way to write a single Turing-complete programming language that can be converted to machine code for a given system? That strains credulity for me.

  • Is Cantor's diagonal construction not Turing-computable? This also strains credulity, since it requires only very basic logic. Of course, if you are something like a large-number atheist and don't believe that 10^^^^10 exists, then you're safe, since the point at which the diagonal element shows up in the list could just be so big that the index doesn't exist in real life.

Anyway I notice that I am confused. Seriously, what the fuck.

EDIT: The issue that arises is the halting problem: for the diagonal construction to work, "computable" would need to mean "known to halt." The enumeration of all programs necessarily includes programs that never halt, and by the halting problem there is no algorithm to filter them out. When the diagonal procedure reaches such a program--including, self-referentially, its own index--it hangs waiting for a digit that never comes. So the diagonal element doesn't halt, isn't computable, and the contradiction evaporates.
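To see concretely where the diagonal procedure dies, here's a toy version in Python. The "enumeration of programs" is a hand-made list standing in for a real Gödel numbering, and program 2 stands in for any program that never halts:

```python
# A toy "enumeration of programs": each takes k and returns k digits.
def never_halts(k):
    while True:      # stands in for any program that never halts
        pass

PROGRAMS = [
    lambda k: "3" * k,                                   # 0.333...
    lambda k: "14159265358979323846"[:k].ljust(k, "0"),  # 0.14159..., padded
    never_halts,                                         # no output, ever
    lambda k: "5" * k,                                   # 0.555...
]

def diagonal_digit(n):
    """n-th digit of Cantor's diagonal element: run program n long enough
    to get its n-th digit, then change it.  If program n doesn't halt,
    neither does this function."""
    digits = PROGRAMS[n](n + 1)
    return (int(digits[n]) + 1) % 10

print(diagonal_digit(0))   # program 0's 0th digit is 3, so ours is 4
print(diagonal_digit(1))   # program 1's 1st digit is 4, so ours is 5
# diagonal_digit(2) would loop forever -- this is exactly where the
# "algorithm" for the diagonal element stops being an algorithm.
```

Every line of `diagonal_digit` is simple logic; the non-computability hides entirely inside the call that may never return.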

Types of Knowledge Reminder [19 Jun 2016|11:39am]
A long time ago I read a book on pedagogy, and among many other things, it outlined four types of knowledge. Since I just posted a framework below on ethics, I'm going to remind myself of this framework and make it more accessible so that I can reference it when designing the course outline I want.

Four types of knowledge:
declarative - "what" - what is the definition of an autonomous system?
procedural - "how" - how do we attach decision processes to actuators?
strategic - "when" - when do autonomous systems have decisive advantages over manned systems? When do they have disadvantages?
justifying - "why" - why are the above situations different? What are the specific factors that cause these differences?

I want to keep these in mind, especially because strategic and justifying knowledge are often left by the wayside, and attempting to integrate these types of knowledge into AI systems is a large part of the short-term ethical project I want to begin building.

A Framework for Ethics in Autonomous Systems [16 Jun 2016|08:50pm]

Note: I wrote this in an e-mail a while ago, near the beginning of my rants on these things. So the context may be a little weird. I wanted to put it up here though since I'm posting everything.


Let's start by talking about the framework I want to use, since I'd like to reference it in basically everything I discuss. This framework comes from a report done by Lieutenant Colonel Artur Kuptel and Andrew Williams at MCDC NATO-ACT, available here: http://innovationhub-act.org/sites/default/files/u4/Policy%20Guidance%20-%20Autonomy%20in%20Defence%20Systems%20MCDC%202013-2014.pdf

The report is an excellent discussion of issues in the use of autonomous weapons systems; however, I will focus only on a few small subsections. First, it does a great job of defining autonomy, as opposed to automation.

> Autonomous functioning refers to the ability of a system, platform, or software, to complete a task without human intervention, using behaviours resulting from the interaction of computer programming with the external environment.

I supplement this definition with a discussion from Paul Scharre in a paper from the Center for a New American Security (CNAS), which can be found here: http://www.cnas.org/sites/default/files/publications-pdf/CNAS_Autonomous-weapons-operational-risk.pdf

On page 9, Scharre defines three operational modes for autonomous systems, depending on the relationship of a human decision-maker in the actions of the system.

These are:

Semi-autonomous operation, or "human in the loop." An example could be the operation of a sewing machine, which manages the task of making individual stitches but stitches only when specifically instructed to by the operator.

Supervised autonomous operation, or "human on the loop." An example might be a toaster, which continues operation with the possibility of human intervention but with the general assumption that humans will intervene only in unexpected circumstances. And,

Fully autonomous operation, or "human out of the loop." For example, a thermostat, which carries out its operations based on its programming and temperature detection all day without human intervention.
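These three modes differ only in where a human decision gates the system's behavior, which can be made explicit in a few lines. A minimal sketch in Python (the `Mode` values and `step` function are my own framing for illustration, not Scharre's):

```python
from enum import Enum

class Mode(Enum):
    HUMAN_IN_THE_LOOP = "semi-autonomous"    # act only on command
    HUMAN_ON_THE_LOOP = "supervised"         # act, but a human may veto
    HUMAN_OUT_OF_LOOP = "fully autonomous"   # act with no human gate

def step(mode, proposed_action, human_command=None, human_veto=False):
    """Decide whether the system performs an action this cycle."""
    if mode is Mode.HUMAN_IN_THE_LOOP:
        return human_command                 # nothing happens without an order
    if mode is Mode.HUMAN_ON_THE_LOOP:
        return None if human_veto else proposed_action
    return proposed_action                   # out of the loop: no gate at all
```

In these terms: the sewing machine only acts on a command, the toaster acts unless someone intervenes, and the thermostat's actions have no gate at all.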

These different modes of operation give rise to different sets of concerns, and have different advantages. Semi-autonomous systems make sense for standard supervised learning problems. They are unlikely to have immediate, disastrous consequences but can still perpetuate systematic biases if the nature of their decision-making is not understood. Supervised systems can be more useful in a broader variety of situations, such as for self-driving cars--where a semi-autonomous system is not very valuable in saving human labor. However in this case a malfunction may occur at speeds too rapid for humans to step in. For a more mundane example, remember how easy it is to burn your toast. Finally, fully autonomous operation is the most dangerous, as any issues that arise may not be addressed by humans until the problems have run their course. For example, a thermostat which heats inappropriately during the summer could cause a fire while residents are at work and unable to intervene.

Note that while dangers present themselves at a given level of autonomy, it may not make sense to reduce that level, because doing so can significantly reduce the value of the automation. For example, having to set one's thermostat every time one desires a temperature change is inefficient compared with one that automatically warms the house slightly in the morning to help you wake up.

In order to understand how tradeoffs are to be made in the implementation of autonomous systems, we need a way to evaluate them. For this, we'll return to the NATO report. They outline four ethical dimensions of autonomous systems, which we will use to begin discussion of any system we come across. These dimensions are malfunction, misuse, unintended consequences, and benefits.

Malfunction includes many classic instances of problems with autonomy. A colorful example is given by Santa in Futurama, who declares everyone in the world naughty and attacks. A more realistic example could be given by a misclassification of a financial profile, which leads to a loan being denied to a qualified applicant. These issues are easy to conceptualize, but difficult to predict. They may also be easy to remedy after the fact; while it may not be possible to prevent an unpredicted accident, once failures are surfaced most machine learning systems can be trained on relevant examples and learn from their failings.

Misuse primarily refers to adversarial environments. This is of course very important for military application (the paper's original aim) but could apply to many situations. For example, a self-driving car that was taken over by hackers could be used to kidnap its owner.

Unintended consequences are a unique aspect of autonomous systems. In human endeavors there are natural sanity checks and guidelines that are followed by virtue of humans continuously making decisions about the external environment and understanding and reporting on their reasoning. If an algorithm for loans adds weight to certain features of an application, it could lead loan applicants to change their applications, perhaps taking out unwise credit lines before applying to affect the algorithm's underlying equations. Perhaps you've heard that self-driving cars are great if you're trying to merge, because it isn't dangerous to cut them off.

Finally, benefits are an important ethical consideration with autonomous systems. All the discussion of possible problems with autonomous systems could lead one to believe that they are never worthwhile--but any negative consequences of a self-driving car need to be weighed against the benefits of freeing up hours every day for billions of commuters and the possibility of reducing the number of fatal traffic accidents (over 30k per year in the US alone). Decisions need to value positive consequences as well as negative--while it might seem horrific to have self-driving cars result in ten thousand deaths per year, if switching to a self-driving fleet could achieve this it would save twenty thousand lives!

Of course, who makes these decisions is also an important consideration, as is how liability is assigned when issues do arise. While it would be a boon for humanity if we could reduce accidents to one third their current level, it is unlikely that Google would want to commit to $80 billion in liability to cover the cost of every accident in the nation.


Superintelligence and Complexity [16 Jun 2016|08:46pm]
I just finished chapter 10 of Bostrom's Superintelligence, on Oracles, Genies, Sovereigns, and Tools. It has been one of the more disappointing chapters of the book. Having followed these sorts of debates for a long time, I've never really been convinced that the "tool AI" approach was "bad" in the way Bostrom and Yudkowsky seem to claim.

I'll present a few of my issues with the chapter here. The overall theme is going to be the difference between machine learning and autonomous systems, and the varying complexities of different intelligence tasks.

First, when discussing a "genie with a preview," he notes that the "preview" option could be applied to a sovereign as well. Having just finished listening to the audiobook of Superforecasting, I'm inclined to take this "preview" possibility a bit more seriously. With current human forecasters, making predictions 100 days into the future is possible, and getting up to 300 days or so is possible with cooperative groups of superforecasters. But everyone's judgement falls to chance (or below) before getting five years out. The proposed reason for this is somewhere between the inherent unpredictability of dynamical systems and the combinatorial explosion of possible interacting factors. If we assume that it gets exponentially more difficult to predict as time extends outward, that gives us a range of about 10x from "normal humans" to "top human performance," which matches my intuitions reasonably well. An AI could be strongly superhuman by being about 100x as good a predictor as the best humans. This would certainly make it an invaluable strategic asset for anyone, with value approaching that of the world's economy. A 100x advantage corresponds to about twice the difference between average predictors and the best predictors, which means it would be able to predict events about two years out as well as most people can predict the next few months to a year. If the government of a small country had access to this, it would be able to outpredict and outnegotiate the entire world's intelligence community handily. But if they wanted an oracle or a genie to give them a preview of a plan of action, by the time the preview reached the two-year mark it would be as uncertain as a normal human's prediction. And this is for an intelligence 100x as good as the best coordinated groups of humans!
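The arithmetic here is easy to check under the exponential-difficulty assumption: if difficulty grows like exp(t/τ), then the horizon grows with the logarithm of predictive skill. Calibrating τ from the 100-day vs. 300-day figures above (a toy model of my own, not anything from the book):

```python
import math

# Model: prediction difficulty ~ exp(t / tau), so a predictor s times
# as skilled sees tau * ln(s) further ahead.  Calibrate from the text:
# a 10x skill step takes the horizon from 100 days to 300 days.
tau = (300 - 100) / math.log(10)   # roughly 87 days per e-fold

def horizon(base_days, skill_multiplier):
    """Horizon of a predictor `skill_multiplier` times as skilled as one
    that sees `base_days` ahead."""
    return base_days + tau * math.log(skill_multiplier)

print(horizon(300, 100))  # 100x the best human groups: ~700 days, about 2 years
```

Each additional 10x of skill buys the same 200 extra days, which is why even enormous advantages translate into fairly modest horizons.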

Having this kind of preview on a genie is great! Very few individual tasks that one might give a genie (especially constrained by some amount of domesticity) would have substantial direct impacts on that timescale. Most could be accomplished in days. But a sovereign is designed to have free rein over the long term. If it decided to take a course of action that it thought was most likely to lead to interstellar colonization, it would run out of certainty long before it got to the nearest star.

This has a side benefit, as well! If a superintelligence understood its own uncertainty, it would be less inclined to move toward the sort of "hare-brained scheme" thinking that leads to a certain amount of the nervousness and hand-wringing in the friendly AI community.

Of course, many assumptions lie in this setup; two in particular I'd like to highlight. First is the assumption that prediction is exponentially hard in a meaningful sense. While this is intuitively a strong case, it's far from mathematically provable, which is certainly the desirable bar to pass when unleashing a genie on the world. Second is the assumption that an AI's ability to predict geopolitical and psychological events will, when AGI comes into existence, be within a few orders of magnitude of humans' abilities. This also seems like a strong intuitive case to me, as I'd expect that the level of understanding required to be a good forecaster is something like "AI-complete" in Bostrom's sense. If a system can parse complex political questions, do the serious research needed to understand them, and combine that coherently into beliefs with well-calibrated uncertainty, I would be very surprised if AGI were far off. However, hardware-scaling tricks and bypassable algorithmic shortcomings might easily give an AGI a ten-billion-times speedup once it can first predict political events. Even that would be limited to seeing three weeks out as clearly as we see tomorrow--but at these scales it's obviously hard to pin down anything meaningful.

My second issue is his discussion of the current world economy as a genie. He denies this comparison, because he can get it to "deliver a pizza," but "cannot order it to deliver peace." However this again blurs the definition of a genie. An AGI could start as something like a singleton with the total power of the world economy, and if it were it would certainly qualify as a superintelligence which constitutes an existential threat! But it would not be able to deliver peace on command. One could quibble about whether this is simply because we cannot phrase our desire for peace nicely, in which case the genie could oblige us. However if this is the true rejection I'd suggest we scale down the genie; what is the minimum level of power the genie requires to constitute a superintelligence? And in practice I am very interested in cases slightly less severe than Bostrom is. For example, the ability to produce billion-person or quadrillion-dollar results would certainly constitute a superintelligence in my book, though it falls short of the "total extinction" malign failure mode mark. His footnote 12 in this chapter drives this difference of perspective home quite solidly.

Bostrom essentially dismisses the possibility that a genie could have a goal structure that allows its requests to be countermanded; however, MIRI and DeepMind recently released a paper proposing real progress on the problem of corrigibility. It certainly does not seem to be a fanciful idea that we could ask a genie to preview near consequences and override actions that quickly leave our expected domain of outcomes!

Finally, when discussing intelligence in the context of machine learning techniques, Bostrom makes no distinction between the machine learning algorithms which provide predictions and guidance, and the autonomous systems in which they are implanted.

A supervised learning algorithm can, today, do a better job coloring in black-and-white photos than any human would be able to. Deep learning systems in the setup of AlphaGo or Watson could probably be made to make business decisions, and automated trading algorithms make huge numbers of financial decisions every day. I'll stay with the automated trading systems for a moment because they illustrate my point well. In May 2010, there was a "flash crash" in which, possibly provoked by a savvy saboteur, automated trading systems went into a feedback loop and drained over a trillion dollars from the market over the course of about 36 minutes, at which point automatic safety systems kicked in, causing the systems to stop trading. During the rest of the day, traders were able to undo many trades, with the heuristic that any trade that occurred more than 10% below the starting price was "obviously in error."
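That cleanup heuristic is simple enough to write down directly. A sketch (the prices, IDs, and function names here are illustrative, not from any exchange's actual rulebook):

```python
def erroneous_trades(trades, reference_price, threshold=0.10):
    """Flag trades executed more than `threshold` below the reference
    price for reversal, per the post-crash cleanup heuristic."""
    floor = reference_price * (1 - threshold)
    return [t for t in trades if t["price"] < floor]

# Toy order book during a crash: one fill is absurd.
trades = [
    {"id": 1, "price": 39.50},
    {"id": 2, "price": 36.10},
    {"id": 3, "price": 0.01},   # a "stub quote" fill
]
bad = erroneous_trades(trades, reference_price=40.00)
print([t["id"] for t in bad])   # trades below the 10% floor get unwound
```

The interesting part is how crude the safeguard is: a single threshold, applied after the fact, was enough to contain the damage.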

While this was a large problem for the world economy, it was easily contained and while similar incidents occur, the safeguards that are in place regularly prevent them from spiraling out of control.

In this system, there are AI systems that vastly outperform human actors. These systems are able to make impactful decisions, and the world has seen the possible consequences of bad stock market outcomes in 1929 and 2008. But the flash crash of 2010 was news to me when I heard about it. The safeguards and reparative measures kept the damage down. And they did this because of the way one "superintelligent" system was implanted in an autonomous system.

I am Scared of AI [14 Jun 2016|06:23pm]

I am scared of the future of artificial intelligence.

I'm not scared because I think we're living in a simulation. I'm not scared because of the way humans treat monkeys. I'm not scared because of the Malthusian implications of Hansonian emulations. I'm not scared because I think we'll lose access to the trillions of stars worth of energy and life that we could obtain in the distant future.

The reason I'm scared about the future of artificial intelligence is that I'm scared about the present and the past of artificial intelligence. Everyone acknowledges that AI is a big deal. Even those who disagree with some of the bigger talking points on AI (Jerry Kaplan vs. Elon Musk, Robin Hanson vs. Eliezer Yudkowsky) generally take the position that AI is an important technology that will have a big impact.

Looking at projects like Watson, DeepMind, Google's self-driving car, and autonomous weapons studies, it's hard to argue that AI isn't a big deal.

And incidents like Microsoft's Tay, Google's classification of black people as gorillas, Watson learning to cuss from Urban Dictionary, and the flash crashes of 2010 and 2013 demonstrate that caution is necessary in implementing AI systems.

As I see it there are two core points that are at the heart of the difficulty in designing AI.

The first is that people don't know what you don't teach them. Computers are even worse at this. You've probably heard jokes from people in IT or customer support about callers needing to be instructed to turn their computers on, or about the difference between an operating system, a desktop, and a web browser. But the gap between a human and a computer is even greater than this.

Even when we build new humans, it takes them years to learn to stand up. It takes them years to learn to form words, and decades to put them together into nice sentences. It takes a lot to get us to cooperate with each other. It takes a lifetime to really learn to communicate with a single other human. Many of us are riddled with mental and emotional traumas from the difficulty of learning to interact with the world around us. And this is what happens with a process of making new humans, which has been attempted a hundred billion times!

With a computer, many of the built-in abilities that a child has, such as separate modules for visual processing, auditory and language processing, and emulation of those around them, are missing. A computer is as alien as a blind, autistic sociopath with no concept of language. Everything we take for granted, from empathy to self-preservation to introspection, will not be present in any computer system, no matter what other signs of intelligence it exhibits, unless we specifically ensure that it is.

But there is some hope still! Many autistic and sociopathic humans can lead normal, pleasant lives that are constructive and morally good. Almost all blind people do! Severe disability is a version of alienation that we see all the time. And while our society does a god-awful job of accounting for it, we can question the institutions around us and build up new practices for dealing with different circumstances. Questions of how to produce empathy and common sense in automated and autonomous systems are difficult, but they are problems that we can attack--just like problems of civil rights and mental health.

(Let me reiterate that I am scared about the future of AI.)

The second point is that human values are complex. As my examples revealed, when we encounter alien experiences, we do a bad job adapting to them. The complexities of our lives are invisible to us.

Humans give inconsistent answers to trolley problems, prisoner's dilemmas, and Pascal's wagers. That's because "logical equivalence" between phrasings of these problems often ignores contextual differences. The role of agency and self-determination in moral judgments is not yet well understood. Reasons for valuing different moral foundations such as harm, purity, fairness, and tradition are hotly debated. But even the debates that currently seem interminable are progress. Moral foundations theory is recent--it depends on measurements that no one thought to take until recently. Decision theory and its relationship to moral philosophy and cognitive science is also a recent development, hinging on developments in mathematics and economics.

In the future, I'll argue that human values are analogous to inverse reinforcement learning, and I'll endorse some of the underlying ideas of timeless decision theory and Bayesian epistemology as being ontologically basic. These are tools that in some cases haven't existed for even five years, and in the oldest cases have existed even in concept only since 1960. As our ability to understand, quantify, and analyze our models of consciousness improves, so do our tools for understanding ourselves and our values.

Overall, I'm scared of the future of AI because it requires us to face down problems that are intensely difficult. I'm scared because the major companies that have the power to make real progress in this field seem to consistently fail to even compile best practices for dealing with the moral and ethical questions that surround any powerful technology.

But I'm also optimistic. Even where Microsoft failed, a huge community of Twitter bot creators has the tools for fixing Tay. Where flash crashes occur, automatic circuit breakers exist to stop and rescind nonsensical automated trades.

If we want to build a future that respects our values, and is filled with software that is able to treat us as people using the intelligence and empathy we are capable of extending to each other, there is a lot of difficult work to be done. But we can do it. And that's what I want to build into the world: a place for engineers to come to understand how to make their devices do a good job of making the world a better place.

Because that is always what you want, when building something.


Data and Consent [13 Jun 2016|04:28pm]
While this isn't necessarily something I see as a top priority when it comes to ethics in ML or AI in particular, one issue that arises a lot in data science and more generally in tech ethics (really in all ethics) is that of consent.

Think of an EULA. You've seen dozens, and signed them all without reading them.

Does that mean you have given your permission to be part of a human subjects research study? Have you really consented?

At best, kind of.

Consent is about choice. You have been presented with a choice: agree with the EULA, or don't use the product. But this isn't a real choice.

If you're searching for a job, and you choose not to have a LinkedIn profile, you may be materially harming yourself. If you want to code, and you don't have a GitHub account, you definitely are.

It may be hard to say what the effect on an individual is of not having a Facebook account, but certainly it affects them.

In a very strong sense, this choice is not a choice.

It would be pretty odd to walk down the street writing down how tall everyone is and what their hair color is, and then to use this information without ever speaking to them.

When collecting data for a data science project, you are doing human subjects research. And that means there are ethical guidelines you should be following.

The first and most obvious is that you should be getting consent from the participants and subjects. That doesn't mean you should squeeze permission into an unread and unreadable EULA to protect yourself legally. It means that completely separate from the EULA, you should ask specific permission for use of each user's data. Specific lists of what data is collected should be available, and it should be possible to opt out of having any specific piece of data collected. This may be difficult to implement on an engineering level, and it may be destructive on a data level, but this is the price of human subjects research.
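As a minimal sketch of how per-field opt-out might work on the engineering side (all names here are hypothetical; this is not any real service's API):

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each user carries an explicit per-field opt-out set,
# and collection code strips those fields before anything is stored.
@dataclass
class ConsentProfile:
    opted_out_fields: set = field(default_factory=set)

def collect(raw_record: dict, consent: ConsentProfile) -> dict:
    """Return only the fields this user has not opted out of collecting."""
    return {key: value for key, value in raw_record.items()
            if key not in consent.opted_out_fields}
```

The data-level destructiveness mentioned above shows up immediately: everything downstream has to tolerate records with missing fields, which is part of the price of doing this honestly.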

As a basic way of negotiating consent in a digital context, I'd suggest having at least three tiers of data-privacy available for any substantial service.

A first tier would involve not collecting, distributing, or utilizing data--a service roughly equivalent to a Chrome incognito window. This would appeal to users who want a relatively unbiased view of the service, one not customized to their interests, and it would also be valuable for those who are very concerned with their privacy.

A second tier could involve collecting and utilizing data in anonymous ways, without distribution--perhaps allowing a custom Facebook feed or targeted ads, insofar as the ad providers are not aware of who sees their ads. These users would not be giving permission for their data to be used in experiments, or perhaps only in validation sets. Considering the sort of data that can be collected in different experiments, this could remain a very useful source of data for less personal experiments, as well as an incentive for data scientists within the company to organize their experiments in a way that respects user privacy. After all, this can give them access to substantially more data.

Finally, there would be a tier of users who actively want their data to be used for experimentation and have little concern for their privacy. These users would be giving the equivalent of enthusiastic consent. People who have specifically volunteered for these positions, in a context where there is a true choice that would let them use your product without doing so, could then safely be reported on in scientific papers. They could even be a source of empowerment for researchers: willing to report back their opinions on changes, propose experimental ideas or stratifications, or even do basic data analysis on publicly available information from users who have actively volunteered to be measured.
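The three tiers could be sketched as something like the following (the tier names and gating logic are my own invention, just to make the proposal concrete):

```python
from enum import Enum, auto

# Hypothetical sketch of the three data-privacy tiers described above.
class ConsentTier(Enum):
    PRIVATE = auto()      # tier 1: no collection, distribution, or use
    ANONYMOUS = auto()    # tier 2: anonymized use only, no distribution
    VOLUNTEER = auto()    # tier 3: enthusiastic consent to experimentation

def usable_for_experiment(tier: ConsentTier, identifiable: bool) -> bool:
    """Can this user's data go into a given experiment's dataset?"""
    if tier is ConsentTier.PRIVATE:
        return False
    if tier is ConsentTier.ANONYMOUS:
        # Tier-two users allow anonymized analyses but nothing that
        # singles them out.
        return not identifiable
    return True  # volunteers have opted in to everything
```

The `identifiable` flag is the tier-two compromise: experiments designed to work on anonymized data get a much larger pool than ones that single users out, which is exactly the incentive for researchers described above.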

Quite aside from the ethical and moral advantages of being able to truly respect user decisions, there are other practical possible benefits. Having groups of extremely engaged users can make user research easier and more productive, and having groups of conservatively half-engaged users could lead to a larger userbase and an easier process of converting new users.

Of course all of this is speculation, but it's important to keep in mind that understanding user consent is also about understanding user needs and user desires. When you make a product that respects people you are making a better product.

Non-explosive Intelligence [02 Jun 2016|11:33am]
[ mood | sleep schedule?! ]

I'm reading Nick Bostrom's "Superintelligence," which overall I've been appreciating (though much of the material so far is a condensation of common wisdom in the LW-sphere). I've noticed my first major issue with the book in chapter 4, in box 4 regarding the kinetics of an intelligence explosion. Here Bostrom sets up an equation for the rate of growth of an AI's general intelligence.

There are a number of assumptions made here, and he does a reasonable job addressing them... except for one assumption that slides in unnoticed. In the middle of page 92, he suggests that once an AI overtakes human levels of intelligence, it will be the primary contributor to its own progress toward increased intelligence. This couples the rate of intelligence increase to intelligence itself, which leads to the exponential nature of the intelligence explosion.
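Concretely, the equation in Box 4 has roughly this shape (I'm paraphrasing the book's notation from memory, so take the symbols as my own), and the assumption above is exactly what turns it into exponential growth:

```latex
% Rate of growth of intelligence I: optimization power over recalcitrance
\frac{dI}{dt} = \frac{D(t)}{R}
% The hidden assumption: once the AI is the primary contributor to its own
% progress, optimization power scales with its own intelligence,
% D(t) = c\,I(t), and for constant recalcitrance R the solution is exponential:
\frac{dI}{dt} = \frac{c\,I}{R}
\quad\Longrightarrow\quad
I(t) = I_0\, e^{ct/R}
```

The step $D(t) = c\,I(t)$ is the one this post pushes back on: the scenarios below are ones where intelligence does not translate directly into optimization power aimed at self-improvement.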

This is not an unreasonable assumption, and throwing out his point because it is an assumption is probably not justified. However, one can easily imagine a situation in which an AI is effectively human-level without actually being any good at computer science, for instance.

For example, consider an AI built from a number of highly specialized modules, such as computer vision processes, language parsing, etc., as well as a decision-making process that connects them together. The decision-making process constructs short-term goals (give that cat a hug), collects information from specialized modules (vision module: I recognize a cat there), and uses that information to act via other specialized modules (movement module: move so the cat is closer; arms module: hug the cat). These sorts of hacked-together processes seem pretty similar to how humans operate. And while the sort of decision-making module that would be necessary doesn't exist yet, once one did, attaching the correct modules would give you an AI that can drive cars, answer Jeopardy questions, summarize books, plan trips, and generally do a sufficiently wide range of tasks to constitute a general intelligence that is "near the level of human," through a combination of being more or less advanced in different areas. One can further imagine that the decision-making module has some ability to add specialized submodules to learn new things; this part of decision-making may not even be that hard (current NLP programs often have markers for "other" or "unknown," which could be used to try to acquire new databases of information about whatever shows up as uncertain). In this way, an AI might approach human levels of intelligence in enough domains to be considered an AGI, without having ANY ability whatsoever to do computer science.
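To make the shape of this architecture concrete, here's a toy sketch in Python (the module names and interfaces are entirely made up; the point is only that the decision-maker treats its modules as black boxes):

```python
# Hypothetical sketch of the hacked-together architecture described above:
# a decision-making loop that queries specialized modules it knows nothing
# about internally. None of these are real systems.

class Module:
    """A specialized subsystem (vision, language parsing, motion, ...)."""
    def query(self, observation):
        raise NotImplementedError

class EchoVision(Module):
    """Stand-in for a vision module."""
    def query(self, observation):
        return f"I recognize a cat near {observation}"

class DecisionMaker:
    def __init__(self):
        self.modules = {}

    def register(self, name, module):
        # New capabilities are bolted on as modules, not learned internally --
        # which is why this design needn't be good at computer science.
        self.modules[name] = module

    def act(self, goal, observation):
        # Gather reports from every module, then hand them to whatever
        # action-selection logic sits on top.
        reports = {name: module.query(observation)
                   for name, module in self.modules.items()}
        return goal, reports

dm = DecisionMaker()
dm.register("vision", EchoVision())
```

Nothing in `DecisionMaker` knows how any module works; self-improvement would require it to acquire a "computer science" module the same slow way it acquires anything else.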

Of course, it would be able to gain the ability to do computer science; however, it would be limited by its ability to be taught--its ability to translate its intelligence into optimization power would be limited.

In a situation like this, where an AI needs to teach itself to do mathematics, a fast takeoff is still certainly conceivable: it could run through, for example, dozens of video lectures in parallel and work through programming assignments in internal compilers, which would give it much easier access to its own shortcomings as a programmer.

However, it is also easily conceivable that a moderate takeoff would happen, as the AI takes a more traditional educational route through the process of becoming a computer scientist. In that case it might exhibit some symptoms of weak superintelligence for some time before an intelligence explosion occurred, without ever being truly even weakly superintelligent (since it might be barely human in most domains, including strategic pursuit of goals, but superhuman in domains already solved by computers).

While less likely, it would also be possible for a slow takeoff to occur in this scenario. For example, if the decision-making processes were not goal-oriented and the recalcitrance of teaching them to make original scientific discoveries were high, then one could imagine a significant population of such agents existing before one decided to learn computer science at all. It is also much more likely in this scenario that there will not be a fall in recalcitrance around human level, since AI is so far from being neuromorphic.

Of course, even this slow takeoff scenario could rapidly transition into the fast takeoff scenario, if it took some time for AIs to seriously attempt to improve themselves and doing so turned out to be less difficult (past some threshold) than expected.

And this all focuses on a somewhat specific scenario: AI that is far from neuromorphic and that learns, using something similar to deep learning, to find solutions ranging from slightly subhuman to significantly superhuman by training on large datasets over the course of days. It's not entirely clear to me how much each of the specifics of this alternate scenario is necessary for a slowdown of the intelligence explosion, or how much even all of them together would necessarily cause a slowdown.

I include it primarily for completeness, because I do believe that the underlying point--that human-like general intelligence may not translate directly into optimization power--is importantly true and may substantially affect the takeoff curves.


Today is a bad day [20 May 2016|11:46pm]
[ mood | very bad ]

Last night was a very good night. I played D&D with my roommates, doing a one-shot while Grant is out of town. We drank three bottles of wine left over from my dad's wedding a few months ago. We got pretty substantially into character and overall found everything very enjoyable. I ate a bunch of Wingstop and a little pizza. I was very drunk and happy and tired.

Then when I got to bed I felt some pressure in my chest, sort of coming and going in waves. When it came, it was painful and it kept me up past 4:30. I was freaking out a bit, because it was unpleasant enough to actually scare me. I started looking things up online, and a few screens came up saying "seek immediate medical attention," which is when the anxiety really started to come on. It was 4am, I was drunk, I was alone, I was scared and in bed, and I really did not want to talk to a stranger on the phone. I debated stuff with myself in my head and rolled in bed and freaked out a bit and eventually pooped then fell asleep.

This morning I woke up and the pressure was still there (it's still there now, late the next night, coming and going) but less intense. Enough that I know I need to talk to someone, enough that I know I need to go somewhere and do something, enough to feel scared that I'm actually tangibly endangering my life by not going or doing anything. Enough that I didn't go outside all day, that I didn't go to work, that I didn't even work from home despite having my computer around and accessible. I didn't eat very much today. I ate my last mealsquare this morning, then around 8:30 I had soup and corn on the cob for dinner. That probably adds up to about 1/3 of my usual calorie intake and I didn't have any caffeine, so now I'm also concerned about what I'm feeling and how it relates to what I've eaten (especially because I ate SO poorly the night that the feeling started).

I have been talking to Anna over text all day. I really want her to be here with me, but she isn't. I can't do this on my own. I can't feel this, physically, and also talk to people about it or go places or take care of myself and what's happening is just that I'm not and a bunch of depression and anxiety and whatever the fuck is happening to me physically are molding together, and turning into an emergency with nonlinear interaction effects--I've been reading military reports on the ethics of autonomous weapons systems recently, one of which included a discussion of the leadup to the Three Mile Island incident. There was a lot of discussion of how various factors combined in unexpected ways to escalate the incident far beyond what was expected, even in the presence of people who did everything they could to deal with it.

There are a lot of factors going on for me right now. One factor is that something is happening to me. Another is my depression. It's very easy for me to default to not doing anything. To staying in bed for hours, alternating between playing phone games and spacing out. Another is social anxiety; I really hate talking to people on the phone. I was going to say strangers, but it's not even that. I don't like talking to my parents on the phone. I don't like talking to friends on the phone. I barely even like talking to Anna on the phone, and that's the closest, best way for me to interact with her that I have. But I especially hate talking to strangers. And calling something like an urgent care center or a nurse hotline necessarily involves talking to a stranger on a phone.

That's not all though. It also involves a chance that I will need to react to the conversation, immediately. It's possible that I will need to go to a hospital. I don't want to drive feeling the way I do, so that would mean either getting an ambulance (!!!) or an uber, which would be another set of talking to strangers on phones that I really can't deal with right now.

So with a ton of social anxiety being triggered my depression wins out, and I don't call anyone. And I feel guilty and anxious and my depression and anxiety just grow in a terrible downward spiral that ruins my day.

So... there is in fact a mechanism for dealing with this that I should have access to. I saw a doctor (once) near my work, to get my prescription renewed, and I got signed up for My Health Online with Sutter. On their website I should be able to access my medical records, schedule appointments, and chat with my doctor or someone else if she's not available.

Perfect, right? A way to interact with people and maybe figure out what I actually NEED to do without having to call strangers on the phone first. If it's not an emergency then I can calm down and stop feeling guilty, and if it is I'll get some adrenaline and social support to push me through the first difficult parts and momentum can take me from there.

But there's a problem--I didn't go on the website for several months after visiting the doctor, and so I didn't remember my login information. I tried resetting my password and verifying my identity but there were issues I couldn't identify and now guess what needs to happen? I need to call their number to get my account unfrozen.

If Anna were here I feel like I would have been able to do it. She could have helped me. But today is the day her dissertation was due. It's kind of a big deal. She's coming up tomorrow anyway. Because she is the literal best and I love her and I need her and she loves me and is the best and will help me and I will be okay.

I wish I had someone else that I could call on. I have work friends that I love but I'm not that close to most of them. And those I am close to, I'm not confident in how real that closeness is. How much of it is me projecting the giant inappropriate crushes I have on everyone? And will they even be able to help me, to come over, to support me through phone calls, to drive me to a doctor's office? I have roommates but... I'm bad at opening up to them, and the closest one doesn't drive, and one is out of town... The friends that I feel comfortable with are far away from me.

I'm nervous about calling things out because I might post this to facebook and the relevant people might read it, but I'm writing this for the catharsis more than anything so I'm going to go a bit deeper.

I wish I could call on Julianne, that it wasn't a weekday today and that she wasn't miles away.
I wish I could call on Clara, that she wasn't insanely busy this week.
I wish I could call on Nicole, I wish that we could be friends again without the baggage that came up.
I wish I could call on my parents, I wish the shit that's going on wasn't sort of polluting my mind with respect to them.
I wish I could call on Tatiana, that it had been more than 5 weeks and wouldn't be super weird (but it definitely would and she doesn't have a car)
and there are dozens more people that I wish I could call on... but what's stopping me isn't them. It's me. It's that I haven't been able to open myself up emotionally the way I want to. That I haven't spent the kind of time and gone through the type of experiences that I need to in order to be comfortable with them. Maybe also I'm oversensitive to what everyone else is going through, the constraints on their time and energy (Surajit :( )

I don't really know what else I should be saying here, so I'm going to wrap this up. I just wanted to write it down and get it out, I guess. 


Families [15 May 2016|03:57pm]
[ mood | Nervous ]

Anna and I have been talking a bit about coming out as poly to our parents. It's not something that we've gone through much effort to hide--she's posted a lot of poly-related articles on facebook and a number of pictures of herself with her boyfriend. But it's still something that neither of us has talked to any of our family members about.

Anna wants her boyfriend to be able to be part of the celebration of her graduation, and that's definitely something I don't want to veto or make rules about, except that her parents and both sets of my parents will be in town for commencement. Which means if he's around all of them will be meeting him. Which is maybe a bit much for him also, I really don't know how he feels about that and I rarely talk to him so that doesn't help matters.

Last night I freaked out a bit about the whole prospect. I am not entirely sure why it felt like such a big deal, but it did, so I'm here now trying to figure it out.

One part of it, I think, is that I don't really have a positive relationship with my parents. Which sounds way worse when I write it like that! What I really mean is, I feel most of the time like I maintain my relationships with my parents for their sakes. There are times when I want an adult to help me, like when I got in a car crash or when I was buying a car (hm I see a pattern?) or when I'm figuring out investments. But in those circumstances, I got just as much help from friends (Julianne and Anna, Clara, Kathleen and coworkers) as I did from my parents, and the sorts of decisions I settled on weren't really helped by them. I think it's been a long time since I really felt that they were operating on a level of more experience than I was--and my life is now sufficiently different from any of theirs that it's rarely obvious what I can learn from them. And then... I'm not really a super social person. I just do my own thing, and I'm usually pretty happy maintaining a day to day deal and not doing occasions or holidays, but doing occasional mental health sick days or lower pressure working from home days. I really like that, just maintaining my own rhythm and not really dealing with things outside of it. So when I interact with my parents, it's generally good! We're on great terms, they're very supportive, I love and respect them. But it's never something that I seek out, it's never a time when I feel comfortable getting at the depths of what's really going on in my life or on my mind. It's just sort of a chore.

So that makes the idea of coming out feel like making a chore harder. And in particular, now that I've invited my parents and some of them have RSVP'd and are making plans to be here, combined with wanting Anna to be free to treat her other relationship like a normal relationship, makes me feel like I HAVE to and don't really have any choice. Plus there's a deadline! And that feels like a lot of pressure and a lot of "I didn't sign up for this" even if I sort of did.

On the other hand, it's not actually just a chore. The sort of emptiness that I usually feel about my interactions with my parents has been developed to let me feel okay about being closed off about technical interests they don't have backgrounds in, media interests they don't share or talk about in the same way, and romantic interests that are... well, that include poly stuff. If we come out about that, then there will be a lot more things in my life that I can talk to them about. I can tell them when Anna's stressed out because of Andy. I can tell them when I start seeing someone new or hang out with someone I have a crush on. I could tell them about how I feel uncertain and apprehensive about what it means to build a second relationship when you have an anchor--how do you build intimacy with someone without centralizing them in your life? How do you talk to one partner about another partner? These are things that I would actually love to have a support network to help me deal with, and these are also things that my parents can maybe understand and talk to me about--they have much more experience in relationships than I do!

But even as I'm writing this I'm thinking about sharing it to my "close friends" group on Facebook, which has a number of people that I care about and trust a lot, who have a lot more in common with me. People who share more interests and cultural background with me than my parents do. People who already have some ideas about polyamory so that I won't need to explain things from scratch. People who can use analogies to Steven Universe and who can share the freshest, dankest memes instead of three layer copypasta casserole.

And then I turn back again, because having a support network isn't the only benefit from having an open relationship with my family (and Anna's). Anna loves holidays. And she will want to spend holidays as a family with her parents and mine and all her partners and maybe mine and theirs and a dozen kittens and a friendly polar bear and three corgis stacked on top of each other and all our close friends and also someone else plans it and she can leave if she wants and be buried in kitties because everyone is happy and doesn't need her to take care of them and I've basically run this analogy into the ground but. The idea is, she wants everyone to be able to get together for holidays and have a nice celebration as a family.

And I don't care about that at all. I'd rather that holidays didn't exist and everyone had slightly different vacation calendars to avoid the rush and there was a universally caring corporate culture so that this didn't actually result in exploitative labor practices which is why holidays are really important and let's cut that digression off right there. But I do care about Anna getting things she wants. And I care about my family getting things they want. And everyone who's NOT me DOES want to spend time together at Christmas so I guess I want them to be able to do that. And in the long term, if I want to live the best life I can imagine, Anna and I will be in a group house of mutually inter-romantic and queerplatonic close friends who contribute to raising each other's pets and children communally. And if we want to invite our parents there for the holidays there will have to be some conversations that happen about what all that means.

Of course, what that all means is still a bit turbulent in my own head. I finally started seeing someone one month ago today who I have something resembling a relationship with now. And it's awesome and exciting! But also terrifying. I'm realizing how much of my conception of relationships is buried in escalators. I'm worried about how overeager I am to commit to things and how defensive I feel about the possibility of losing her. I'm feeling weird about little moments that I have with her that are similar to the little moments that I have with Anna. And I'm fundamentally concerned about how I don't really interpret other humans as having brains and thoughts in ways similar to how I do, and how I can start to resolve that as I (hopefully, gradually) get to know more people like her who are a bit more culturally LessWrongian.

I don't know if this really resolved anything or if it really even got all my thoughts out into the open. But it did get some of them, and maybe enough of them for people who aren't me to start sitting and stewing or giving advice to me (which I would honestly appreciate) or even just feeling like they understand me a tiny bit more. So whatever. I'm out of momentum and I'm going to ship it. We can fix the rest of it live.

(wouldn't it be convenient if I could feel that way about coming out? It makes me want to pretend that I do but empirically I feel more strongly and more defensive.)


Generalizing from Fictional Evidence [08 May 2016|08:43pm]
[ mood | Contentious? ]

I posted on Facebook about my position on Civil War, and got into an extended argument with a number of people about the whole situation. Eventually it got a bit too heated and died down mostly, and I had some anxiety about it, but then I ended up talking things out with my housemate Lucas and we traced our disagreements down to the roots. The basis of that is a point that I feel sort of strongly about and wanted to write up, so I'm laying down this context as sort of an excuse/case study.

My impression coming away from the movie was that Iron Man (and in particular everyone around him: Natasha, Rhodey, and Vision) was totally justified, and that Cap's actions in the movie were basically incomprehensible to me. It seems like I'm in the (substantial!) minority on this issue, and before the conversation I didn't really have an understanding of how that was possible.

There were two major points brought to my attention that underlie what I think differs between me and most moviegoers. The first is that Captain America, after being revived, saw the following actions from major government organizations:

  1. Brought him out of freezing and lied to him about where he was, then used violence to attempt to pacify him when he discovered he was being deceived.

  2. Fired a nuclear missile at Manhattan, without time for evacuation, and against the recommendation of troops on the ground.

  3. Turned out to be controlled almost entirely by Hydra.

  4. Captured and experimented on the Maximoff twins.

That's a lot of super sketchy stuff to wake up to. For someone like Cap, it makes sense that he would take that information and conclude that, hey, maybe the UN would also turn out to be 90% Hydra and there'd be a big problem with the work they were doing. And of course Bucky, Wanda, and Scott all have mixed past experiences with various legal institutions, and I think we can all agree that Sam Wilson is just there because of his unrequited love for Steve.

And of course, he's right. If the Avengers ever started working for the UN, I'm 100% sure that they would write up a "UN infiltrated by Hydra!" arc, and everything would go to shit. And it would be a terrible arc, and a terrible movie, but it would prove Captain America right and make morality simple and be an easy plot device to recycle.

The second point that Lucas mentioned is that I really like analogizing fiction to real life. As Lucas said, "I feel like we all agree in the real world that 5 people shouldn't be above the law cuz they're great at murder." I really hope we all agree on that! And I really hope that we all agree that in real life, the kind of paramilitary action that the Avengers are constantly engaged in isn't necessary, and that problems from government come from moral disagreements and poorly designed bureaucracy, not from secret Nazi cultists.

Both those points make me more sympathetic to Cap in the movie, but they don't make me believe that he's right. And maybe notably, I'd really like to double down on the second point.

The world we are in has powers and possibilities that were literally unimaginable a hundred years ago. When our legal codes were written and our moral and ethical traditions began, there weren't cars. There weren't highways. There weren't high-rises. There weren't planes. There weren't international space stations and the internet and combat drones and trillion dollar stock market crashes caused by computer programs. There wasn't even a trillion dollars in the economy! And of course our legal and moral and ethical traditions have evolved, and are continuing to evolve, but they're far behind the times. Looking at urban planning in San Francisco, water planning in southern California, fiberoptic cable in northern California (north of the bay, I mean), self-driving cars, and regulation of AirBnB and Uber--without even leaving my home state, I can see thousands of directions in which technology of many varieties is poorly handled by our society.

There is a long tradition of using science fiction to explore the moral geography of the future. In fact, it's often the only way to do so, since any speculation on things that do not yet exist is necessarily science fiction! And especially as we approach a future filled with artificial intelligence of various sorts, nearly all-knowing social media conglomerates, and consumer access to technologies like 3D printing and body modification, the lessons we take from the science fiction we read and write and watch and play will begin to come up in our day to day lives.

I don't really want to sort through a giant list of things that are bringing us into the future. What I do want is for us to be able to look at the stories we tell about technology, about how to behave with powers we don't understand or in circumstances that have never arisen before, and for them to give us advice that we should follow.

No human in the real world has the moral certitude of Steve Rogers. No one has the ability to invent technology on the fly that Tony Stark has. No officer has the irreplaceable personal relationship that James Rhodes has. Everyone in our world is flawed and uncertain. And I don't want stories to tell us that when you feel that you are right, that overrules the entire world asking you to stop.

Depending on the text of the Sokovia accords I might vote against them as a politician, or write amendments or campaign for changes. But as a member of the Avengers, as a paramilitary vigilante whose mission is to save the world, I would never directly set myself against the United Nations, the only legitimate voice of the world I am trying to save. Trying to save people who don't want you doing what you are doing is a bad idea. And if you are writing a piece of media in which the best and noblest character makes a good decision, and that decision is to use unlimited violence without any accountability, I think you're doing it wrong and you're harming the world.

Of course, I'm also the kind of guy who would totally dig a documentary style "The Sokovia Accords" featuring two and a half hours of debate between T'Chaka, Tony Stark, United States President Obama, UN Security Council President Amr Abdellatif Aboulatta and UN Secretary General Ban Ki-moon about the exact structure of the accords and the specific interests of all kinds of different parties in the operations of the Avengers.

WARNING, DIGRESSION AHEAD: I'm also crossing my fingers that Marvel recovers the Fantastic Four license so that Victor Van Damme can emerge from the ruins of Sokovia as the disfigured and masked head of the new state of Latveria he is building from Sokovia's ashes. Then the plot of Black Panther could be about him leading the UN investigation into Wakanda surrounding the disappearance of war criminal James Buchanan Barnes. Since Doctor Strange is coming out first it wouldn't be weird for Doom to have magic, and it would also be a neat twist for them to tease and introduce the Fantastic Four in Black Panther's media, when he was originally introduced in FF.

Speech and Language Processing Curriculum Brainstorm [24 Apr 2016|02:45pm]
Anna and I have talked a bit about the idea of building a speech and language processing Nanodegree program. I'm talking to her now and I'm going to write my thoughts as they come up.

I wrote a bunch, and it was complicated, and we talked a lot. I think I should get a full outline down from the beginning, before we get too deep into details, because we think about these things so differently.

The end goal in my mind is for someone coming out of the ND to understand the context of linguistic annotation work, so that they can do it effectively and understand how it is used. Given annotated data, they should be able to perform basic language and speech processing tasks like speech recognition and subject analysis.

They will begin with a basic understanding of data analysis, at least enough familiarity with machine learning to know what classification tasks are and how to evaluate them, and basic familiarity with Python, stats, etc. These feel like pretty steep prerequisites, but much of this is either pretty intuitive or clearly presented in e.g. DAND.

From a text perspective, first tasks should be something like using bag of words on the 20newsgroups dataset, then improving the model via stopwords, tfidf, etc. Very basic data cleaning for text.
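A minimal sketch of that first exercise, using a tiny invented two-document corpus in place of the real 20newsgroups data (which a course would actually fetch with a library like scikit-learn); the stopword list and documents here are made up for illustration:

```python
import math
from collections import Counter

# Toy stand-in for the 20newsgroups data: two tiny "documents".
docs = [
    "the space shuttle launch was delayed by the weather",
    "the hockey team won the game in overtime",
]
stopwords = {"the", "was", "by", "in"}

# Bag of words with stopword removal.
bags = [Counter(w for w in d.split() if w not in stopwords) for d in docs]

def tfidf(term, bag, bags):
    """Term frequency scaled by inverse document frequency."""
    tf = bag[term] / sum(bag.values())
    df = sum(1 for b in bags if term in b)
    return tf * math.log(len(bags) / df)

# A term unique to one document gets a positive weight.
print(round(tfidf("shuttle", bags[0], bags), 3))  # 0.139
```

The same shape of pipeline scales directly up to the real dataset; the stopword and tf-idf steps are exactly the "improving the model" knobs mentioned above.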

From a speech perspective, first tasks should be sound data types--using PyAudio, the Wave library, etc., to start with audio files and process these into things like lists of pitches. A first project might be something like, process sound files marked as speaker 1 or speaker 2 to create a classifier for whether sound files are speaker 1 or speaker 2.
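A sketch of what that first speech exercise might look like. Rather than reading a real speaker recording, this synthesizes a tone in memory with the standard-library wave module and estimates its pitch by counting zero crossings; the 440 Hz tone and in-memory buffer are stand-ins for actual labeled speaker files:

```python
import io
import math
import struct
import wave

# Synthesize one second of a 440 Hz sine at 16 kHz, standing in for a
# recorded sound file.
rate, freq, seconds = 16000, 440.0, 1.0
samples = [int(32767 * math.sin(2 * math.pi * freq * t / rate))
           for t in range(int(rate * seconds))]

buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# Read it back as a student would read a .wav file, then estimate pitch
# by counting positive-going zero crossings (one per cycle).
buf.seek(0)
with wave.open(buf, "rb") as w:
    n = w.getnframes()
    data = struct.unpack("<%dh" % n, w.readframes(n))
crossings = sum(1 for a, b in zip(data, data[1:]) if a < 0 <= b)
pitch = crossings / seconds
print(round(pitch))  # approximately 440
```

Extracting a list of such pitch estimates over short windows, per file, would give exactly the kind of feature vector the speaker 1 vs. speaker 2 classifier project needs.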

Going back to text, consider word pairs. Look at which word pairs end up being significant. Look at which parts of speech these word pairs are in. Use text data annotated with parts of speech. Try to extrapolate parts of speech using text data.
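One way to make "which word pairs end up being significant" concrete is pointwise mutual information over bigram counts. A toy sketch (the little corpus is invented; a real exercise would use a POS-annotated corpus and far more data):

```python
import math
from collections import Counter

# Tiny invented corpus; "new york" is a genuine collocation here,
# while "a big" co-occurs more nearly by chance.
text = ("new york is a big city . new york has a big port . "
        "a dog saw that big ship in new york .").split()

unigrams = Counter(text)
bigrams = Counter(zip(text, text[1:]))
total = len(text)

def pmi(pair):
    """Pointwise mutual information: how much more often the pair
    co-occurs than independent unigram frequencies would predict."""
    w1, w2 = pair
    p_pair = bigrams[pair] / (total - 1)
    return math.log2(p_pair / ((unigrams[w1] / total) * (unigrams[w2] / total)))

print(pmi(("new", "york")), pmi(("a", "big")))
```

Scoring every bigram this way and then looking at the POS tags of the top-ranked pairs is a natural bridge to the part-of-speech exercises in the same unit.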

Back to speech: learn what formants are. Basic assessment of the IPA. Build a dictionary associating formants to IPA. Extract formants from very nicely formatted speech sound files.
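The formant-to-IPA dictionary could start as literally that: a lookup table of typical F1/F2 values plus nearest-neighbor matching. The numbers below are ballpark averages for a few American English vowels (roughly in line with the classic Peterson and Barney measurements), so treat them as illustrative rather than authoritative:

```python
import math

# Approximate average (F1, F2) in Hz for a few vowels.
VOWELS = {
    "i": (270, 2290),   # as in "beet"
    "u": (300, 870),    # as in "boot"
    "a": (730, 1090),   # as in "father"
    "ae": (660, 1720),  # as in "bat"
}

def nearest_vowel(f1, f2):
    """Map a measured (F1, F2) pair to the closest vowel in the table."""
    return min(VOWELS, key=lambda v: math.dist((f1, f2), VOWELS[v]))

print(nearest_vowel(290, 2200))  # i
print(nearest_vowel(700, 1100))  # a
```

Feeding this function formants extracted from the "very nicely formatted" sound files is the whole exercise: the hard part students would discover is how messy real formant extraction is compared to the clean table.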

Drop back: sound processing isn't necessarily speech. Pull a ton of features, hit them with PCA, and cluster them. Observe how long this takes. Look at how different formants and pitches correspond to the basic features being extracted. Project onto predefined features--observe how much faster this is than PCA, and hopefully the results are equivalent. The dataset should be something like car noises, speech, monkey calls, bells ringing, and dogs barking, being clustered/classified.
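A sketch of that PCA-then-cluster step, with synthetic feature vectors standing in for real extracted audio features (the "bells"/"barks" classes, the 20 dimensions, and the degree of separation are all invented), using NumPy for the SVD and a minimal hand-rolled 2-means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted audio features: two classes in 20
# dimensions, separated along the first three axes.
bells = rng.normal(0.0, 1.0, (50, 20))
bells[:, :3] += 6.0
barks = rng.normal(0.0, 1.0, (50, 20))
X = np.vstack([bells, barks])

# PCA via SVD of the centered data; the top components should capture
# the between-class separation.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[:2].T  # project onto the first two principal components

# Minimal 2-means on the 2-D projection.
centers = proj[[0, -1]]
for _ in range(20):
    dists = ((proj[:, None] - centers[None]) ** 2).sum(-1)
    labels = np.argmin(dists, axis=1)
    centers = np.array([proj[labels == k].mean(axis=0) for k in range(2)])

# With this much separation, each class should land in its own cluster.
print(labels[:50], labels[50:])
```

Timing this against clustering on hand-picked features (skipping the SVD) is the comparison the unit calls for.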

There are a few more things I'd love to do or include here that I'm not really sure how to. For example, breaking up speech sounds into smaller parts. How do you draw lines between words? With extensive annotated data you could just use ML. What kind of ML does well? What parts of the IPA are difficult to tell apart? How can you use context to distinguish between probability and brobapilidee?

An awesome capstone project would be to take something like a long piece of sound data, extract words from it, try to identify them as parts of the vocabulary, then use those words to attempt to classify the subject matter.

This is mostly me spitballing. I was hoping to spend more time going over this with Anna, but she's busy and stressed out and very far away from the knowing-nothing-about-linguistics perspective and the project-oriented format, so I guess we'll have to take on this conversation in more detail later.

Here's an Itemized List of Thirty Years of Disagreements [21 Apr 2016|12:14am]
[ mood | writerful ]

I went to a talk by Jerry Kaplan called "AI: Think Again" tonight. I have a lot of thoughts about it, and as the title suggests there are many places I disagree with him. However, I think the majority of those disagreements occur somewhat in the future, or else are stylistic. For example, he said he wouldn't want to watch F1 racing with driverless cars. WHAT THE FUCK, MAN?! That's just crazy talk. Obviously it's different than current F1 racing, but it's awesome. Robot cars, going fast and crashing gloriously.

In terms of the core points, there are a lot of places that I do agree with Dr. Kaplan and I have some hope of working with him on a course on ethics for workers in machine learning and artificial intelligence. I think approaching these ethical issues as engineering issues and taking responsibility for them as designers is very important and I hope I can produce something, perhaps with his help, that will enable many more people to engage more productively with the ethical issues involved in building autonomous systems.

All that serious stuff being said, here are my somewhat shorthanded notes, and digressions thereupon.

Dr. Kaplan said that he has seen no persuasive evidence that machines are on the path to thinking. Later in the talk he suggested that this is generally not the most interesting question to ask, that really what we are concerned with is whether machines are on the path to needing to be considered as agents and granted moral clout. I think very much that they are, even independent of other issues. Of course this is, at this time, universally NOT because we need to respect the preferences of machines, so much as because we need to respect the preferences of their users. However I do somewhat believe that humans might just be big old neural networks--and that a neural network built in such a way that it could pass a Turing test would have its own moral worth, and its preferences would need to be considered by society. He expressed after the talk that whole brain emulations are not something he would assign moral worth to--that statement somewhat concerns me! Though I've read enough Robin Hanson to realize that if you just grant legal status to emulations of people you get some bad Malthusian results real quick.

"Where are the robots?" There was some expression that, after various barriers were broken by AI (Deep Blue, self-driving cars, Watson), there is an expectation of robots doing everything. On the one hand, of course automation is generally increasing human capability and reducing crew sizes more than replacing humans outright. On the other hand, say you build a neural network with a hundred hidden layers and plug it into a robot body with cameras and speakers and a microphone and try to teach it like a child. What do you think would happen? It's not commercially interesting but I'm curious philosophically why one would think it's necessarily impossible for this entity to attain sentience.

Dr. Kaplan expressed that while machines can perform the same tasks as humans at or far above human levels, this "doesn't mean that machines are intelligent in the same way as people." I agree that this is true of most AI tasks, however what about ML models that explicitly build "conceptual understanding" by mixing pre-built underlying models in different ways? How much do we need to know about our models and about our own brains? Is the substrate or even the computational algorithm (or being written in C versus Java versus Python) actually important in determining whether something is intelligent in the "same way as people"? With tools like LIME becoming available we may be able to start understanding a bit more "why" deep neural nets work the way they do, and I certainly can imagine the possibility that they think the way humans do!

Dr. Kaplan mentioned that IQ is meaningless--while I agree that it's a bad measure of what we call "intelligence" I'm skeptical of it being meaningless since it's powerfully predictive, even at levels >4 standard deviations away from the mean (though no longer linearly predictive). This is being a bit more pedantic than many of my points though.

Dr. Kaplan noted that in many arenas, incremental progress occurred for many years before breakthroughs, especially Deep Blue and self-driving cars. I suspect much of this was the case with Watson, but this was explicitly NOT the case with AlphaGo. Though a certain amount of this should be credited to Google simply pouring far more resources into the problem than was expected, the ability of these things to happen much faster over time is a specifically important concern!

In reference to a discussion in a press release by the creator of the Perceptron, Dr. Kaplan said regarding machine translation, "He was right! It was just fifty years later!" Meanwhile, he dismissed the creator's thoughts on other matters. This seems suspect to me; couldn't the other ideas have also been prescient, but simply be delayed to a time when computing power and data availability were more ubiquitous? A day like today?

Dr. Kaplan addressed the idea that neural networks are biologically inspired, saying that airplanes are as well. He said, "we're not worried that 747s will build nests." However the guiding principles from nature used in a 747 are basic mechanics, and very well understood. We understand neural networks only on a very surface level, and the entire purpose of the automation of them is to give them the ability to surprise us! They certainly do surprise us with happenstances like referring to black people as gorillas and turning into racist little shits at the drop of ten thousand trolls' hats. While this is a far, far cry from developing their own preferences and rising up against us, within the scope of tasks that machines are capable of they certainly do behave in an unruly fashion a lot of the time--and those scopes are ever expanding.

In reference to HAL 9000 from 2001, Dr. Kaplan said "How hard is it to say it's not okay to kill people in pursuit of your goals?" My first thought is that, in some circumstances, it WILL be necessary for automated systems to kill people. Even if this is not their intended purpose! For example, a self-driving car facing a trolley dilemma. Dr. Kaplan did address that this is important--that in order for automated systems to make truly appropriate moral decisions, it will be necessary for them to know how to evaluate things like human life.

He also said "If you were on the engineering team for HAL 9000, you'd be fired." My thought following this was, what about the team designing Tay? Was anyone fired? Of course, no one died when Tay was released, but it was a CLEAR case of gross incompetence on the part of the team at Microsoft. Many community standards exist for creating Twitter bots, and they were repeatedly ignored. Blacklists were created specifically to safeguard the bot but they were woefully insufficient. In order for this to be comforting, I need to see real consequences visited on real people who are building real projects! Maybe it hasn't been publicized but I don't think anything major really happened to the engineers working on Tay. And then they re-released her and she fucked up again. I guess I just need to continue my twenty year plus trend of not putting any trust in Microsoft engineering. Apologies to my roommate who's an engineer at Microsoft.

Dr. Kaplan said that what is important for autonomous agents is that we teach them to abide by human social conventions--and this is an area in which there is very little research to be built on! I'm happy to point to this paper on teaching reinforcement learning agents social norms through storytelling, but one paper is not enough to make a field of research.

My other thought on that is that properly teaching agents social norms is very much an anthropomorphic analogy both in terms of the way he phrased it and in terms of... if teaching something social norms isn't a mark of sentience, I'll start getting curious about why I think other humans are sentient. Obviously I exaggerate slightly, but with the amount of shit he talked about anthropomorphizing AI the whole talk made me a bit salty when he started doing it.

One concept that I found very interesting from the talk was the idea of a Safe Operating Envelope. For example, when a self-driving car runs out of fuel or is confused by its circumstances, it tries to safely come to a stop. This seems like a great design pattern, and it underlies a lot of ideas that I've seen and approved of in a number of circumstances. That said, I think there are a lot of boundaries that can make the idea problematic. For example, if a self-driving car exits its SOE on a crowded freeway, it definitely can't just safely come to a stop. If an automated stock trader is participating in a flash crash, how can it really tell this is happening? Obviously it could measure a greater-than-1% change in the overall stock market, but I'm a bit concerned. I guess part of this is I'd love to see more about what kinds of warning signals can exist for stock trading algorithms.

Dr. Kaplan mentioned another cool idea I'd like to hear more about regarding licensing for autonomous agents. If a robot is going to give you a massage, it should probably pass some standards to verify that it won't destroy your body--how would you come up with and enforce these standards? How can you interact with enough masseuses to gain the specific subject matter knowledge needed to come up with useful and coherent licenses, without alienating them at the prospect of being replaced by robots?

Dr. Kaplan also mentioned the idea of job mortgages, which I found a bit concerning, as on its face it sounds a lot like the student debt crisis. Of course, working at Udacity with the jobs guarantee, I guess we're doing a version of this that I'm hoping will benefit workers a lot more than traditional universities do.

He mentioned that AI replaces labor with capital, which drives the value of capital up, which drives the concentration of wealth toward the richest. He then discussed how redistributive economics is necessary in this ecosystem. Of course this wasn't an econ talk, so I guess I shouldn't expect a solid answer as to how this should work. That said, he mentioned after the talk the program from the past of taking government land on loan and being granted ownership of the land if one works it for seven years. Granting public capital to citizens who put it to use does sound promising--especially in this era where capital can mean already-existing unoccupied rentals and second tier refurbished computers rather than the dangerously expansionist land grants of the past that created conflict with indigenous Americans.

Dr. Kaplan said that danger from AI is an engineering problem, not an existential threat to humanity. Why not both? These are (clearly?) not mutually exclusive, especially after the advent of nuclear weapons.

He also mentioned that he believes the future will be more Star Trek and less Terminator. I wonder what metrics he's using to compare our societies. It definitely seems to me that we already live in a crazy cyberpunk dystopia. We are a long way away from Star Trek.

On a number of issues, he referred to laws governing property, corporations, and pets. While I can respect that these are good places to start for understanding liability and culpability when building a legal system around autonomous agents, leaning on them for societal solutions feels a bit close to passing the buck to me. I'd much rather see a stronger engineering solution: a set of widely distributed best practices, following from a coherent set of design principles, that can guide us in building machines that will make the future better instead of worse.

Overall I got a reasonable amount out of the talk. A lot of my disagreements are about my being more concerned with taking personal responsibility for issues than he seems to be. Some of them are about my looking at a greater time horizon than he is. But regardless of all of this, I think we both agree that we need a set of best practices to exist, and we need to have conversations about them and improve them.
