So I seem to have this razor-fine balance between art, engineering and Not Wallowing In My Own Filth that I have to meet or I feel angry and depressed and off my game. And if I go far enough off it I'll start to do things like desperately drawing at midnight so that my art demons don't hound me all day at work the next day.
And when Art is not getting enough love... it gets... temperamental.
It decides not to work on established projects-- those smell too much like work to it. It needs fresh ideas, fresh motifs, fresh methods. I end up downloading new tools, learning new scripting languages, anything to keep from doing something I've done before.
It's like my aforementioned Anti-Depth Filter-- once I'm good at something-- I'm past done with it. This isn't a short attention span in the traditional sense: I can get swallowed for DAYS in one of these things, but days isn't months, to say nothing of the years I need to really /plow/.
I've done a lot of things that are really cool so far in my life, but I'm learning that there are all sorts of weird, delicate balances I have to maintain to really be long-form productive.
Also I think making tweaks and tests for high-performance memory is kind of eating up most of my staying power lately. 'least it pays good :\
What if geeks are better off unpopular? What if making science and technology "uncool" creates a drive in the technically minded to prove that "cool" is bullshit, that all the people more interested in being popular and having influence are the ones crippling themselves--
Okay, that statement's obviously false. It's more a sign of a deeply sick civilization that has its priorities upside-down from being an empire for so long it's forgotten how to be a country, whose institutions and traditions have all degenerated into entitlement-complexes driven by short-sighted nationalism, and it's got to stop.
Maybe this geek chic stuff is just the first sign of people waking up to the fact that the culture of "cool" as it was defined before was empty and self-poisoning. Perhaps it really is time to get excited and make things.
Read a really interesting article in Wired just now about how opening databases to the public often empowers the already-data-savvy over the many non-digitrati out there, and I think this underscores another one of those yet-unmade institutions I keep saying we should be building, the public number-crunching sphere.
Someone (IE people like me) should be making software that knows how to read various public databases and grind them into actionable information and wisdom, and it should be usable. Along with the software there should be wikis and FAQs that explain what's in there. And along with all THAT there should be resources "on the ground" in places like community centers and libraries to show off, explain and "sell" the tools.
Oh man, ever wish you could remember more of the dream you had because of how awesome it obviously was? Last night was definitely one of those. I mean, it had falling through the EE atrium onto a giant statue of Rarity, teaching the CMCs how to build opamps, telekinesis, trolling, and at one point I was working in a soda shop where the head waitress was making a gorgeously mad strawberry smoothie recipe dedicated to Rory from Doctor Who.
Also I was Twilight Sparkle a few times, which was awesome.
The switches, the on-offs, the zero-ones, the unitary fibers of the digital flowering are now countable atomic widths and near-countable atomic heights and lengths, and soon all will be countable, and soon they will shrink to dozens of atoms in total, and as they do so their switching times and mechanisms will shrink down to femtosecond motions of single electrons, and while this won't really spell the end of us as the cutting edge until we build up, build up we will, and dozens of atoms will be adders and hundreds processors and thousands computers, and no matter what stuff we make this computronium from, a few of your red blood cells will be quietly replaced with computers armed to the teeth with algorithms, and with charged surfaces that sift and rearrange their apparent shapes to attract and repel the bulky molecules of our bodies.
First we'll just draw maps with it.
There will be a beautiful soaring age lasting perhaps an entire decade where the maps we draw of the brain as a million wirelessly-enabled bloodcell-sized computers survey us, occasionally flashing faked passports at white blood cells thinking they might be pollen or viruses, will show us amazing wonders, and we'll learn of the symphony in each of us with a depth and a richness that will make our previous guesswork sound hollow and silly, and with each new discovery, the vitalists will howl with laughter at those of us who said we'd ever understand it all at the rate we were going.
Mr. Kurzweil of course will just continue smiling that infuriatingly placid I've-got-a-secret smile of his.
He will be smiling because the second act of this coming decade of thrills is the real kicker, the shocker, the one nobody saw coming because someone somewhere is going to program these little bastards to actively intervene in the program. First the biohackers will inject themselves with a virus that puts light-sensing vacuoles on their brain cells, and LEDs on their blood monitors, and they'll have heads-up displays and on-board supercomputers and someone will be saying they're actually dumber than anyone before them because they don't really memorize or even properly look up anything anymore, it's all on the computer that's gushing around inside them, past the heart, through the lungs, over and over, swapping packets, on and on.
But by the time the first heads-up displays are properly working, someone will have figured out how to make stable computronium tumors.
Little knots of metal and glass clump together at the base of the skull, and in the frontal lobe, and little strands crisscross the brain, held together only by a little electrostatic cling from each cell-computer, beguiling the immune system with false signals easily, and starting to really read-write with the pulsing, stormlike computation of the mind. Somewhere someone is saying it's no real disruption, no real difference, still a tool and there are still no signs of any Singularity anywhere.
This'll be a bit funny, because her IQ measured against any 20th-century human's will be off every scale; she'll be able to do finite element analysis in her head, and although she hasn't heard about it on her feeds yet, someone's passing around a "mirroring" program that stores your personal storm data in such detail that you could probably rebuild yourself from it on a new brain...
So I forgot if I've said this already, but we really really REALLY need to get past whether genetic engineering of crops and livestock is "good or bad" and get to WHICH KINDS OF ENGINEERING ARE GOOD AND BAD.
This is important, because it's happening anyway, and if we don't make societal and market pressure appear somewhere for it to be the GOOD kind, we'll end up with a "natural" food market that is anemic and an "industrial" food market that keeps coming up with the bad kind of genetic engineering because seriously have you seen this shit.
I haven't worked out what the rules should be exactly, but I can definitely classify a few:
* adding vitamin A (as beta-carotene) to rice so it makes a better staple: GOOD IDEA
* finding, implanting, and sharing sequences that render wheat hardy against diseases: GOOD IDEA
* retrieving sequences from other crops to create new hybrids: NEAT, IF NOT AS NECESSARY AS ABOVE
and then you've got:
* reducing a crop down to one "perfect" genome: BAD IDEA, REMEMBER THE POTATO FAMINES GUYS
* making minor tweaks and then basically pulling a "well poisoning" to sue farmers in neighboring fields on IP grounds: BAD IDEA, ALSO, EVIL
* making crops "Roundup Ready" so they can be grown in a soup of poisons: HOLY FUCK GUYS THIS IS WHY WE CAN'T HAVE NICE THINGS, LIKE POLLINATORS.
It is not entirely unreasonable to think that there is some chance that within this century (or some century hence) computation will create minds greater than ours. It is also not entirely unreasonable to suspect that once this happens, it will not be long before minds MUCH greater than ours are made, since as far as we can tell the fundamental unit of intelligence doesn't have a very large minimum granular size.
It's not a slam dunk, sure, but nothing about the above seems truly impossible.
SO, there comes a warning from a computer scientist working on the problem of intelligence: the resulting minds might not be "friendly" in the end. If we consider that we're not talking about mere slightly-smarter AI but VASTLY smarter AI, it's easy to see why we might be concerned-- we ourselves do not consider it a big deal when, say, we kill mosquitoes or bacteria. It's very easy to go from there to imagining New York getting swallowed by grey goo and the artilect that did it remarking "yeah, it's just humans dude."
And really I don't think it's sensible to argue one way or the other-- this is a real risk to Strong AI. (The problem is that there are also risks associated with attempting to legally prevent it, or even with not getting it-- we can't DO much about this risk.)
But personally? My suspicion is that it will not work out that way. My expectation is that we WILL be cared about, if in a fairly "aww who's a good kitty" fashion. My reason for feeling this way is simple timing: we do not care about insects and bacteria, generally, because they are our distant relatives-- the fork in the road that separates us is many aeons away, and no insect ever did anything that led to the creation of humans, or anything similar.
To the artilects, we will be tiny, and perhaps even inconsequential, but there will be one true thing about us: among us, will be their direct progenitors. Not in some abstract sense but in a very real one.
If by some impossible fluke, one of your parents were an ant, wouldn't you behave differently towards them? I'm guessing you would, and I'm guessing the future intelligences will.
(Also I'm kind of planning to be one and I promise to not smite humans recklessly no matter how inconvenient they might get.)
---------------

To be clear, I cannot be certain that I am still human.
I have been told by the transcendi that this is the Real Reality, and that they consider it some kind of crime, or at least a violation of the ordinances, to take that from me. But, and let us be very, very clear here: from the perspective of artilects, the fact that they have me convinced could be four lines of code, written by a neckbeard artilect who thought it'd be funny to spin off a part of itself small enough to think things like this.
But if I am in reality, and I am real... I'm pretty sure I'm the last human being alive.
Now this also is a wild statement, and the Quiet transcendi that doubtless are swarming around me are giggling AND arguing AND laughing AND feeling sorry for me as I sit perched with my laptop on a blown-out skyscraper in Ethiopia, looking out over a black, flat plain of solar panels which are really cities full of computerized matter (and so is the skyscraper, but it's more flagrantly retro)-- but if all that shit I wrote at the top here is true, there is philosophical interest in that fact.
What's really interesting is how few of the "real" people care.
This isn't like not caring about some forgotten tribe in South America, Africa or Australia (all of whom, I am told, are doing quite fine simulated, thanks, they even gave them little theme park afterlives for when they get done), or not caring about that one guy who was the last World War Two veteran, or EVEN like not caring about the last spotted owl, which isn't accurate anyway because I'm told the spotted owls are actually doing pretty well these days.
No, it's like not caring about the bacteria in your mouth.
What does it mean, then, that I have a little singing, dancing Greek chorus of AIs following me everywhere and talking about me like I was My Little Pony in the 2010s to their 4Chan?
Someone, somewhere, clearly has a LOT of time on their hands.
I got a request on the original post, but I'm reposting for visibility:
I think there's a way for you to officially join the project; basically, you have to email me so I know who to add. In honesty, though, I don't have SVN figured out yet, so I'll be doing merges manually anyway. But I'll try to add you if you email me and ask. Very busy lately.
On Humor/Seriousness balance: similar to UH2, but definitely starting out more ironic and with a (much) more fragile fourth wall, if a thread of epic develops I will not try to step on it, but it's very important that things "feel right", so please don't feel bad if I put off or fail to merge parts that bother me for even extremely subtle reasons. I'll try to provide some feedback but as I said, BUSY. (Also note! If you don't like my direction and want to try going somewhere else with it, that's cool too! Just, y'know, not so much with merging to my branch...)
On forgivingness: I've established in the game that it's got a LucasArts-esque pattern where instead of comedically killing you off at every turn and pushing you into a Save Everything pattern (as in Sierra, which is not bad but also not this game), there is a non-irreversibility property, which is to say, if something IS irreversible, it had better be exactly what you were supposed to do and it had better not block anything else you'll need to do later. This makes game design trickier in places but also makes for a more cinematic (IE less thinking about the game AS a game) feeling and fits the overall lighthearted mood I'm shooting for.
Quick Coding Note: There's a system in the 9Verb engine for doing all interactions inside the "anyClick" event rather than in separate functions for the different verbs. Both work, but I'm trying to switch over to only using the anyClick because it's cleaner and more flexible and supports the Unhandled() method which is pivotal to a "breathing" 9Verb game.
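For the curious, here's a rough sketch of what that looks like in practice. (Caveat: the exact names-- UsedAction, the eGA_* action enums, Unhandled()-- follow the 9-Verb template's conventions as I understand them, so double-check against your template version; oDoor and its responses are made up for illustration.)

```
// One anyClick handler covers every verb for this object, instead of
// separate per-verb event functions.
function oDoor_AnyClick()
{
  if (UsedAction(eGA_Open)) {
    player.Say("It swings open with a creak.");
  }
  else if (UsedAction(eGA_LookAt)) {
    player.Say("A sturdy oak door.");
  }
  else {
    // Every verb we didn't explicitly handle falls through to the
    // template's stock responses -- this is the "breathing" part.
    Unhandled();
  }
}
```

The win is that all of an object's behavior lives in one place, and any verb you forgot still gets a default line for free instead of dead silence.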