Remember Google Wave? Welcome to the Future…
I’ve recently come to the conclusion that there are a couple of really interesting ideas that deserve attention but that nobody else is talking about. There are lots of little steps to take and code to write to get there, but with improvements to SEA, AXE getting started, etc., we are pretty close. Then there are odd side things, like:
We need to sanitize and deterministically canonicalize HTML end-to-end for user input. This one, for example, is based on my old normalize library. It will let us attach Markdown, Medium, or custom editors to dApps while making sure the input can be synced in GUN and show up on other browsers/devices exactly as it was edited/shown/previewed on yours.
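To make the idea concrete, here is a minimal sketch of what deterministic canonicalization could look like (this is my illustration, not the actual normalize library): tags and attributes outside an allowlist are stripped, attribute order is sorted, and text is entity-escaped, so the same tree serializes to the same string on every device.

```javascript
// Hypothetical allowlists — the real library would be far more complete.
const ALLOWED_TAGS = new Set(['p', 'b', 'i', 'a', 'ul', 'li']);
const ALLOWED_ATTRS = new Set(['href', 'title']);

function escapeText(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

// node: { tag, attrs: {...}, children: [...] } or a plain string (text node)
function canonicalize(node) {
  if (typeof node === 'string') return escapeText(node);
  const tag = node.tag.toLowerCase();
  const children = (node.children || []).map(canonicalize).join('');
  if (!ALLOWED_TAGS.has(tag)) return children; // unwrap disallowed tags
  const attrs = Object.entries(node.attrs || {})
    .filter(([k]) => ALLOWED_ATTRS.has(k.toLowerCase()))
    .map(([k, v]) => [k.toLowerCase(), v])
    .sort(([a], [b]) => (a < b ? -1 : 1)) // deterministic attribute order
    .map(([k, v]) => ` ${k}="${escapeText(v)}"`)
    .join('');
  return `<${tag}${attrs}>${children}</${tag}>`;
}

canonicalize({ tag: 'A', attrs: { title: 't', HREF: '#' }, children: ['hi'] });
// → '<a href="#" title="t">hi</a>', byte-identical on every browser/device
```

The point is that two peers syncing edits through GUN can hash or diff this canonical string and always agree, no matter which editor produced the input.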
Interested in helping with this piece? Please let us know!
It is important to build a meta-editor first. It might look just like a Twitter, Facebook, or SMS input, but if you start typing something longer, it automatically expands into a Medium-like article/blogging editor. It would also start behaving like a gDoc collaborative editor if you want others to help.
One cool piece, inspired by the guy who built our Docs system, is that I also want hyper-linkable/inlineable articles, which GUN’s graph data structure makes possible. My scientific brain wants small standalone arguments or explanations of an idea, each on its own page. Then, when I’m writing another article and want to include that “idea”, I should be able to paste it inline. And whenever the standalone version gets updated, so does the other article, since it is using GUN.
We have a lot of collaborative, interactive educational pieces: Cartoon Cryptography, the Sorting Story, the Todo Dapp, Trusting Strangers, Distributed Matters, etc. I want to make more of these pieces, make them easier to make, and make them easier to interlink, reference, and reuse.
But wouldn’t it be nice to have your blog posts or tweets also in audio format?
This meta-editor should be expandable to record audio. There are a bunch of interesting possibilities here:
Beyond just being able to record audio in the meta-editor for podcasting: what if, while you are writing your tweet that has expanded into a long-form Medium-like article that you are gDoc-sharing with your friends, it auto-started an audio call with anybody who jumped in to help? Now you can talk with somebody on the opposite side of the world while you are collaborating on the article!
If these snippets also have accessible audio segments attached to them, then it would also be possible to “remix” podcasts the same way that, earlier, I suggested you could inline other standalone ideas/arguments into your article as text. These would also update in realtime, so your new article gets free audio for the old portions it inlines. How cool would it be to highlight some text, read it aloud, and have the system automatically know that the text is annotated with audio?
And don’t even get me started on my old Accelsor (2011!) ideas about how the meta-editor could also help draw cartoons/images/graphics, or be like Google Slides. Or let you add a meta-editor into your meta-editor (super meta!), so that your “article” or “post” gives people the ability to write a reply/comment inside it, the same way you might insert an image into your blog. Now you have built your own commenting system!
Another thing I want to use the podcast/audio/accessibility work for is creating dynamic music. I have an old demo of a synthesizer that generates classic/acoustic piano sounds while you write English, so you wind up playing a song by writing a tweet (or try this one by somebody else).
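The core trick behind typing-becomes-music can be sketched in a few lines (this mapping is my own illustration, not the demo’s actual one): hash each letter onto a pentatonic scale so any sentence yields a melody that is guaranteed to sound consonant.

```javascript
// C-major pentatonic (plus octave extensions) — no note clashes possible.
const PENTATONIC = ['C4', 'D4', 'E4', 'G4', 'A4', 'C5', 'D5', 'E5'];

function textToMelody(text) {
  return text
    .toLowerCase()
    .split('')
    .filter(c => c >= 'a' && c <= 'z')  // letters only; spaces become rests
    .map(c => PENTATONIC[(c.charCodeAt(0) - 97) % PENTATONIC.length]);
}

textToMelody('hi'); // → ['E5', 'C4']
```

Feed each resulting note name to a Web Audio oscillator as you type, and writing a tweet plays a song.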
Or, cooler, one of the next things I want to do (and why I’m looking for a Web Audio API volunteer; think you can help? Talk to us!) is to hum into the meta-editor’s podcast system and then ask it to convert the hums in the waveform into notes, which I can then apply different synths to (orchestra, piano, guitar, drums, etc.). Once the waveform is converted to notes, those notes should be editable. Maybe I made a mistake in my hum, so I can delete that note, or change it.
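One step of this is straightforward math, and here is a sketch of it (an assumption about how it could work, not a finished pitch tracker): once a pitch detector gives you a frequency in Hz for a slice of the waveform, snapping it to the nearest equal-tempered note is what turns a continuous hum into discrete, editable notes.

```javascript
const NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

function freqToNote(hz) {
  // A4 = 440 Hz = MIDI 69; each semitone is a factor of 2^(1/12).
  const midi = Math.round(69 + 12 * Math.log2(hz / 440));
  const name = NAMES[midi % 12] + (Math.floor(midi / 12) - 1); // scientific octave
  return { midi, name };
}

freqToNote(440);    // → { midi: 69, name: 'A4' }
freqToNote(261.63); // → { midi: 60, name: 'C4' } (middle C)
```

The detected-frequency input itself would come from something like an `AnalyserNode` plus an autocorrelation pass; this sketch only covers the snap-to-note step.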
This opens up other, even neater ideas. Since the hum is now notes, you can adjust the timing: you could draw a line/arc/something to adjust the notes-per-second, so that even if your hum wasn’t quarter-second consistent, your synth-generated version of the notes is. Or it could slowly speed up or slow down, based on the arc you draw.
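Both timing tricks reduce to small transforms on the note-onset times. A sketch, with function names of my own invention: first snap each sloppy hum onset to a quarter-second grid, then remap the timeline through a curve (the drawn arc) so playback gradually changes speed.

```javascript
// Snap raw onset times (seconds) to the nearest grid point.
function quantize(times, grid = 0.25) {
  return times.map(t => Math.round(t / grid) * grid);
}

// curve maps normalized position [0,1] -> [0,1]. x => x * x compresses the
// opening and stretches the ending (a gradual slow-down); Math.sqrt would
// do the opposite (a gradual speed-up).
function remapTempo(times, curve) {
  const total = times[times.length - 1];
  return times.map(t => curve(t / total) * total);
}

quantize([0.02, 0.27, 0.61, 0.98]);        // → [0, 0.25, 0.5, 1]
remapTempo([0, 0.25, 0.5, 1], x => x * x); // → [0, 0.0625, 0.25, 1]
```

The drawn arc in the UI would just be sampled into that `curve` function.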
Now what if we take this hum-to-editable-notes idea and apply it back to podcast recording? What if you recorded while selecting text, so the system knows which portion of the text the audio makes accessible to blind readers… but you messed up? You said “um” or something. Couldn’t the system detect the peaks and valleys in the waveform, like it did with a hum, to let you go back and delete the “um” the way you could delete a note? Rather than just clipping the audio, it could “blur” the transition between before and after, so the deleted word doesn’t even sound like it was clipped; it plays smoothly.
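Here is a sketch of that idea under a big simplifying assumption (my own toy model, not production audio code): treat the recording as an amplitude envelope and use its valleys (near-silence) as word boundaries, exactly like note boundaries in a hum, then splice a segment out with a short taper instead of a hard cut.

```javascript
// Find [start, end) index ranges where the envelope rises above a threshold.
function segmentByValleys(env, threshold = 0.1) {
  const segments = [];
  let start = null;
  env.forEach((amp, i) => {
    if (amp >= threshold && start === null) start = i;  // word begins
    if (amp < threshold && start !== null) {            // word ends at a valley
      segments.push([start, i]);
      start = null;
    }
  });
  if (start !== null) segments.push([start, env.length]);
  return segments;
}

// Remove a segment (the "um") and taper the samples leading into the cut,
// so the splice plays smoothly instead of clicking.
function deleteSegment(env, [from, to], fade = 2) {
  const out = env.slice(0, from).concat(env.slice(to));
  for (let k = 0; k < fade && from - 1 - k >= 0; k++) {
    out[from - 1 - k] *= (k + 1) / (fade + 1); // quieter as it nears the cut
  }
  return out;
}
```

A real version would work on raw samples with a proper crossfade window, but the shape of the feature is the same: valleys become handles, and deleting a word is as safe as deleting a note.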
But first things first: we need to normalize HTML deterministically, to guarantee that the structure of collaborative edits renders the same on different devices/browsers.
Who wants to help write a deterministic HTML sanitizer? :)
Also want to see a future like this? Then please help spread the word, clap this article a bunch, or retweet!
If you found this article interesting, you will also enjoy reading about the Future of Social Networking!