Twitter: @twitchard Github: github.com/twitchard
Specifically, they are the immediate future between now and the end of the meetup, because Purescript and Alexa are the subjects of my talk.
They are significant to the future of software beyond just this talk.
The research firm Gartner predicts that by 2020, 75% of US households will contain a "smart speaker" like the Amazon Echo. Another research firm predicts only 55% by 2021. Either way, the trend is clear. Amazon is the market leader right now -- in 2017, Amazon was estimated to hold about 70% of the market, while Google held 24%.
Apple has the HomePod. Harman Kardon makes the "Invoke" speaker, powered by Microsoft's Cortana. Facebook is working on a video-enabled smart speaker, and Samsung is working on a smart speaker that's supposed to come out this year. There's also an open-source voice assistant, "Mycroft".
Not as big a market as say, smartphones obviously. But I think smart speakers will get better, people will become more used to interacting with them, and more products will integrate with them. Right now, voice is still somewhat of a novelty. I don't think that will last for long. In a couple of years, I think most consumer-facing software companies will be building experiences for voice.
There are two reasons why that excites me:
Talking to your computer is super cool. It's like Star Trek. You can even change the wake word to "computer" if you want to. I've always had conversations with my computer, especially while trying to solve a hard or frustrating problem. But now, talking to your computer can be a normal thing.
Accessibility. I think the voice ecosystem is really exciting in terms of accessibility. For most software, I feel like accessibility, if it isn't completely neglected, becomes an afterthought. In my experience, we web developers barely have the will to support good experiences for people using older browser versions than we use, let alone people who are using screen readers. With voice, though, visually impaired folks get largely a first-class experience.
And it's not just greater accessibility for the visually impaired. People with physical disabilities that make it difficult or inconvenient to sit down at a computer or flip around on a smart phone -- or people who just aren't particularly computer literate -- it can be a lot more natural to accomplish something via a conversation with Alexa.
I think the main qualm people have about the smart speaker life is privacy: it feels like the device is "always listening," and there's just something that feels wrong about having a device connected to Amazon that is constantly recording audio. According to Amazon, they don't record anything you say unless Alexa thinks she hears the wake word. But if Alexa does think she hears the word, Amazon keeps the recordings, and they can be subpoenaed by law enforcement. For me, the convenience and entertainment value outweigh the privacy concern. But from now on I won't be conspiring to commit a crime with anybody named Alexa.
I've made the case for why I think Alexa is the future. But why Purescript? Do I honestly think that Purescript is the future?
Isn't Purescript a close relative to Haskell -- whose motto is "avoid success at all costs"?
Isn't Purescript the language whose Wikipedia article keeps getting deleted because the powers that be have determined it isn't ‘notable' enough to deserve a Wikipedia article all to itself?
Yes, both of these things are true -- but I believe that Purescript is the future for one reason:
The New York Purescript Meetup
You are brilliant individuals, each and every last one of you. I can tell just by standing here, being in your presence. And I sense that before each of you lies a great destiny.
For some of you, your destiny is to be inspired by this talk. While I'm telling you about Alexa skills, your imagination will alight and your brain will produce an idea for the greatest Alexa skill of all time. After the meetup is over, the first thing you will type into your terminal is git clone purescript-alexa-template, and you will make your idea into a reality. You will implement the greatest Alexa skill of all time in Purescript, and because of your enormous success, Purescript will become the language of choice for implementing Alexa skills, and thus Purescript itself will become the language of the future. Maybe my talk should have been titled "Using Alexa to help Purescript take over the world."
For others of you, your destiny is to remain completely unmoved by this talk. While I am up here gabbing on about Alexa, you'll be zoning out and thinking about how much you'd rather be hearing about something else. And then your imagination will alight. The idea for that something else will come to you, and you will concoct the outline of the greatest Purescript talk of all time. After the meetup is over, the first thing you will do is contact Dustin and sign up to give the talk at the next Purescript meetup. Your talk will inspire dozens and dozens of Purescript developers to new heights of functional programming excellence, and change the face of software forever.
Whatever your destiny, the future starts now. Together, we will bring an end to the dark era of software, the era of mutability, partial functions, and run-time exceptions, and we will usher in a golden age of strong types, purity, peace, and monads.
Disclaimer: Everything Richard says represents his own views and not those of his employer, friends, or anybody else associated with him in any way.
type Session = Maybe SessionRec

type SessionRec =
  { secretWord :: String
  , guesses :: Array String
  , status :: Status
  }

data Status = Normal | GivingUp | Loading
-- GivingUp = we just asked, "are you sure you want to give up?"
-- Loading = we just asked, "do you want to pick up from where we left off last time?"
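When a session is persisted or round-trips through Alexa's session attributes, a record like this becomes plain JSON. A hypothetical payload for a game in progress (the field names match SessionRec; the string encoding of Status is an assumption):

```json
{
  "secretWord": "aardvark",
  "guesses": ["apple", "anchor"],
  "status": "Normal"
}
```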
For example, the sample utterances for secret word's "GuessIntent" are:
samples:
[ "Guess {Word}"
, "I guess {Word}"
, "My guess is {Word}"
, "How about {Word}"
]
For example, in secret word, the {Word} in "I guess {Word}" represents a slot.
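In the interaction model JSON you upload to the Alexa developer console, the intent, its slot, and the sample utterances above fit together roughly like this (the slot type here is an assumption -- a real skill might use a custom slot type instead):

```json
{
  "name": "GuessIntent",
  "slots": [
    { "name": "Word", "type": "AMAZON.SearchQuery" }
  ],
  "samples": [
    "Guess {Word}",
    "I guess {Word}",
    "My guess is {Word}",
    "How about {Word}"
  ]
}
```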
SSML lets you make Alexa whisper.
<speak>
I want to tell you a secret.
<amazon:effect name="whispered">I am not a real human.</amazon:effect>
Can you believe it?
</speak>
You can also embed mp3s shorter than 90 seconds.
<speak>
Welcome to Car-Fu.
<audio src="https://carfu.com/audio/carfu-welcome.mp3" />
You can order a ride, or request a fare estimate.
Which will it be?
</speak>
Deploying your skill is as easy as
ask deploy
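The ask command is Amazon's ASK CLI. A typical first deployment (commands from ASK CLI v1; ask init performs the one-time account linking) looks roughly like:

```shell
# One-time: link the CLI to your Amazon developer account
ask init

# Scaffold a new skill project (or clone a template instead)
ask new

# Push the interaction model and the Lambda function in one step
ask deploy
```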
data Output
= JustCard Card
| JustSpeech
{ speech :: Speech
, reprompt :: Maybe Speech
}
| SpeechAndCard
{ speech :: Speech
, reprompt :: Maybe Speech
, card :: Card
}
hasReprompt :: Output -> Boolean
hasReprompt (JustSpeech { reprompt : Just _ }) = true
hasReprompt (SpeechAndCard { reprompt : Just _ }) = true
hasReprompt _ = false
An Alexa skill is a state machine. The ‘Session' describes the state, and the ‘intents' and ‘slots' describe potential actions that may cause state transitions.
runSkill (ErrorInput err) Nothing = errorAndExit
runSkill (ErrorInput err) sess = errorAndContinue sess
runSkill _ Nothing = beginOrRestoreGame
runSkill Launch _ = beginOrRestoreGame
runSkill Stop _ = exit
runSkill Cancel _ = exit
runSkill No (Just sess@{status : Loading}) = beginNewGame
runSkill Yes (Just sess@{status : Loading}) = restoreGame sess
runSkill _ (Just sess@{status : Loading}) = errorAndContinue sess
runSkill Yes (Just sess@{status : GivingUp}) = playerLoses sess
runSkill No (Just sess@{status : GivingUp}) = promptForGuess sess
runSkill _ (Just sess@{status : GivingUp}) = errorAndContinue sess
runSkill Yes (Just sess) = errorAndContinue sess
runSkill No (Just sess) = errorAndContinue sess
runSkill Help (Just sess) = readInstructionsAndContinue sess
runSkill (Guess guess) (Just sess) = handleGuess guess sess
runSkill Thinking (Just sess) = handleThinking sess
runSkill GiveUp (Just sess) = confirmGivingUp sess
runSkill SessionEnded (Just sess) = persistSession sess
purescript-alexa and purescript-alexa-template
exports.handler = function (event, context, callback) {...}
purescript-alexa-template is much more useful