A Sunny Outlook

Some years ago, I wrote a chapter in a book called Farming Futures. The book is about social entrepreneurship in India, and my chapter was about a firm called Skymet. Skymet is a private weather forecasting firm based partially out of Pune and partially out of Noida (along with offices in other locations). Researching the chapter got me interested in both how the art and science of weather forecasting has developed over time, and where it is headed next.

Only trivia enthusiasts are likely to remember the name of the captain on whose ship Charles Darwin made his historic voyage that was to result in the publication of “On the Origin of Species”. Fewer still will remember that Admiral Robert FitzRoy committed suicide. The true tragedy, however, is that it is almost certainly his lifelong dedication to predicting the weather that caused him to take his own life.
We have, in the decades and centuries since, come a long way. Weather forecasting today is far more advanced than it was in Admiral FitzRoy’s day. Britain, Admiral FitzRoy’s own nation, today spends more than 80 million GBP a year on its meteorological department. It has an accuracy of around 95% when it comes to forecasting temperatures, and an accuracy of around 75% when it comes to forecasting rain – anybody who is even remotely familiar with Britain’s notoriously fickle weather would know that this is no small achievement.

Farming Futures: Emerging Social Enterprises in India

Those numbers that I cited, and the tragic story of Admiral FitzRoy, come from a lovely book called The Weather Experiment.


But I first read about weather, and the difficulties associated with forecasting it, in a book called Chaos, by James Gleick:

Lorenz enjoyed weather—by no means a prerequisite for a research meteorologist. He savored its changeability. He appreciated the patterns that come and go in the atmosphere, families of eddies and cyclones, always obeying mathematical rules, yet never repeating themselves. When he looked at clouds, he thought he saw a kind of structure in them. Once he had feared that studying the science of weather would be like prying a jack-in-the-box apart with a screwdriver. Now he wondered whether science would be able to penetrate the magic at all. Weather had a flavor that could not be expressed by talking about averages. The daily high temperature in Cambridge, Massachusetts, averages 75 degrees in June. The number of rainy days in Riyadh, Saudi Arabia, averages ten a year. Those were statistics. The essence was the way patterns in the atmosphere changed over time…

Ch. 1, The Butterfly Effect, Chaos, by James Gleick

What is the Butterfly Effect, you ask? It gets its own Wikipedia article; have fun reading it.
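If you would rather see the idea than read about it, here is a minimal sketch in Python of the Lorenz system that Gleick’s chapter describes. It uses only the standard library, and the parameters are the textbook ones; the only point being made is that two runs which start a hair’s breadth apart end up in completely different places.

```python
# The Lorenz (1963) system: a toy model of atmospheric convection, and the
# origin of the "butterfly effect". Two trajectories that start out almost
# identical eventually bear no resemblance to each other.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude Euler step of the Lorenz equations.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # first run
b = (1.0, 1.0, 1.0 + 1e-9)   # second run, perturbed by one part in a billion

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step:5d}: x_a = {a[0]:8.3f}, x_b = {b[0]:8.3f}, "
              f"gap = {abs(a[0] - b[0]):.6f}")
```

Tiny measurement errors in today’s atmosphere compound in exactly this way, which is a large part of why forecasts degrade so quickly beyond a few days.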


All of which is a very long way to get around to the write-up we’re going to be talking about today, called After The Storm.

On 29 October 1999, a “Super Cyclone” called Paradip devastated parts of Odisha and the east coast of India. At wind speeds of almost 250 kms per hour, it ravaged through the land, clearing out everything in its path. Fields were left barren, trees uprooted like mere matchsticks, entire towns devastated. More than 10,000 people lost their lives.
Fast forward to two decades later. In 2020, bang in the middle of the Covid-19 pandemic, another cyclone—known as Amphan—speeds through the Bay of Bengal. It crashes into the land like Paradip did in 1999. Like before, many homes are destroyed and structures uprooted. But one thing is different: this time’s death toll is 98. That’s a 100 times lower than 1999’s casualties.
What made this difference possible? Simply put: better, timely and more accurate weather prediction.

https://fiftytwo.in/paradigm-shift/after-the-storm/

We’ve made remarkable progress since the days of Admiral FitzRoy. Predicting the weather is still, admittedly, a very difficult and very expensive thing, as this lovely little write-up makes clear, but it is also something we’re much better at these days. We have better instruments, better computing power, better mathematical and statistical tools to deploy, and the ability to synthesize all of these to come up with much better forecasts – but it’s not perfect, and it’s not, well, good enough.

Those last two words aren’t meant as a criticism or a slight – far from it. The meteorologists themselves feel that it is not good enough:

“It almost becomes like flipping a coin,” Professor Islam says. “The IMD is not to be blamed. They will be very good at predicting the weather three or four days in advance. Beyond that, it cannot be done because there is a fundamental mathematical limitation to these questions.”
“IMD can do another sensor, another satellite, they can maybe improve predictions from two days, to three days. But can they do ten days? There is no evidence. Right now there is no weather forecasting model on the globe. India to Europe to Australia, it doesn’t matter, it’s not there.”

https://fiftytwo.in/paradigm-shift/after-the-storm/

As Professor Islam says, he wants to move up from being able to forecast the next four to five days to being able to predict the weather over the next ten days. Why? So that communities in the path of a storm have adequate time to move. What could be more important than that when it comes to meteorology?


So what’s the constraint? This is a lovely analogy:

“I give this example to my students,” the professor says, “Look, usually all of science and AI is based on this idea of driving with the rearview mirror. I don’t have an option, so I’m looking into my rearview mirror and driving. I will be fine as long as the road in the front exactly mirrors the rearview. If it doesn’t and I go into a turn? Disastrous accident.”

https://fiftytwo.in/paradigm-shift/after-the-storm/

It’s weird what the human brain will choose to remind you of, but this reminds me, of all things, of a gorilla. That too, a gorilla from a science fiction book:

Amy distinguished past, present, and future—she remembered previous events, and anticipated future promises—but the Project Amy staff had never succeeded in teaching her exact differentiations. She did not, for example, distinguish yesterday from the day before. Whether this reflected a failing in teaching methods or an innate feature of Amy’s conceptual world was an open question. (There was evidence for a conceptual difference.) Amy was particularly perplexed by spatial metaphors for time, such as “that’s behind us” or “that’s coming up.” Her trainers conceived of the past as behind them and the future ahead. But Amy’s behavior seemed to indicate that she conceived of the past as in front of her—because she could see it—and the future behind her— because it was still invisible.

Michael Crichton, Congo

That makes a lot of sense, doesn’t it? And that’s the fundamental problem with any forecasting tool: it necessarily has to be based on what happened in the past, because what else have we got to work with?

And if, as Professor Islam says, the road in the future isn’t exactly like the past, disaster lies ahead.
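To make the rearview-mirror point concrete, here is a deliberately simple sketch of my own (not anything from the article): fit a straight line to the “road” behind you, and watch the forecast fall apart the moment the road ahead takes a turn.

```python
# Driving with the rearview mirror: a model fitted only on the past does
# fine while the future resembles the past, and fails badly when it doesn't.
# This is a toy example of my own, purely for illustration.

past = [(t, 2.0 * t) for t in range(10)]  # the road so far: a straight line

# Fit y = a*t + b by ordinary least squares, using the past alone.
n = len(past)
sum_t = sum(t for t, _ in past)
sum_y = sum(y for _, y in past)
sum_tt = sum(t * t for t, _ in past)
sum_ty = sum(t * y for t, y in past)
a = (n * sum_ty - sum_t * sum_y) / (n * sum_tt - sum_t ** 2)
b = (sum_y - a * sum_t) / n

def actual_road(t):
    # The road "takes a turn" at t = 10: the future stops mirroring the past.
    return 2.0 * t if t <= 10 else 20.0 - 3.0 * (t - 10)

for t in range(10, 15):
    predicted = a * t + b
    print(f"t={t}: predicted {predicted:5.1f}, actual {actual_road(t):5.1f}, "
          f"error {predicted - actual_road(t):5.1f}")
```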


But Artificial Intelligence and Machine Learning need not be about predicting what forms the storms of the future might take. They can be of help in other ways too!

“It hit us that the damage that happened to the buildings in the poorer communities could have been anticipated very precisely at each building’s level,” Sharma explains. “We could have told in advance which roofs would fly away, and which walls would collapse, which not so. So that’s something we’ve tried to bring into the AI model, so that it can be a predictive model.”

“What we do is, essentially, this: we use satellite imagery or drone imagery and through that, we identify buildings. We identify the material and technology of the building through their roofs as a proxy, and then we simulate a sort of a risk assessment of that particular building, right? We also take the neighbouring context into account. Water bodies, how high or low the land is, what kind of trees are around it, what other buildings are around it.”

The team at SEEDS and many others like it are more concerned about the micro-impact that weather events will have. Sharma is interested in the specifics of how long a building made from a certain material will be able to withstand the force of a cyclone. This is an advanced level of interpretation we’re talking about. It’s creative, important and life-saving as well.

https://fiftytwo.in/paradigm-shift/after-the-storm/

In other words, we may not know the intensity of a particular storm, and exactly when and where it will hit. But given assumptions of the intensity of a storm, can we predict which buildings will be able to withstand a given storm and which ones won’t?
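To get a feel for what “a risk assessment of that particular building” might look like in code, here is a heavily simplified, purely illustrative sketch. Every category, weight and threshold below is an assumption of mine for illustration; the actual SEEDS pipeline works off satellite and drone imagery and is far more sophisticated than this.

```python
# A toy scoring function in the spirit of the approach Sharma describes:
# roof material (inferred from imagery) as a proxy for build quality,
# adjusted for the neighbouring context. All numbers here are made-up
# assumptions for illustration, not the actual SEEDS model.

ROOF_VULNERABILITY = {
    "concrete": 0.1,     # assumed: engineered roofs hold up best
    "metal_sheet": 0.5,
    "tile": 0.6,
    "thatch": 0.9,       # assumed: most likely to "fly away"
}

def storm_risk_score(roof_type, elevation_m, distance_to_water_m, wind_speed_kmph):
    """Return a rough 0-to-1 risk score for one building under an assumed storm."""
    base = ROOF_VULNERABILITY.get(roof_type, 0.7)
    # Low-lying land and nearby water bodies push the score up.
    flood_factor = 0.2 if elevation_m < 5 else 0.0
    water_factor = 0.2 if distance_to_water_m < 200 else 0.0
    # Scale by storm intensity relative to a 250 km/h "super cyclone".
    intensity = min(wind_speed_kmph / 250.0, 1.0)
    return min((base + flood_factor + water_factor) * intensity, 1.0)

# Example: a thatched-roof house on low ground near a pond, in a 180 km/h cyclone.
print(storm_risk_score("thatch", elevation_m=3, distance_to_water_m=150, wind_speed_kmph=180))
```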

This, as a friend of mine to whom I forwarded this little snippet said, is very cool.

I agree. Very cool indeed.

And sure, accuracy in weather forecasting may still be a ways away, and may perhaps lie forever beyond our abilities. But science, mathematics and statistics might still be able to help us in other ways, and that (to me) still counts as progress.

And that is why, all things considered, I’d say that when it comes to the future of weather forecasting, sunny days are ahead.


In case you haven’t already, please do subscribe to fiftytwo.in

Excellent, excellent stories, and the one I have covered today is also available in podcast form, narrated by Harsha Bhogle, no less. All their other stories are worth reading too, and I hope you have as much fun going through them as I have.

AI/ML: Some Thoughts

This is a true story, but I’ll (of course) anonymize the name of the educational institute and the student concerned:

One of the semester end examinations conducted during the pandemic at an educational institute had an error. Students asked about the error, and since the professor who had designed the paper was not available, another professor was asked what could be done. Said professor copied the text of the question and searched for it online, in the hope that the question (or a variant thereof) had been sourced online.

Alas, that didn’t work, but a related discovery was made. A student writing that same question paper had copied the question, and put it up for folks online to solve. It hadn’t been solved yet, but the fact that all of this could happen so quickly was mind-boggling.

The kicker? The student in question had not bothered to remain anonymous. Their name had been appended to the question.

Welcome to learning and examinations in the time of Covid-19.


I have often joked in my classes in this past decade that it is only a matter of time before professors outsource the design of the question paper to freelance websites online – and students outsource the writing of the submission online. And who knows, it may end up being the same freelancer doing both of these “projects”.

All of which is a very roundabout way to get to thinking about Elicit, videos about which I had put up yesterday.

But let’s begin at the beginning: what is Elicit?

Elicit is a GPT-3 powered research assistant. Elicit helps you classify datasets, brainstorm research questions, and search through publications.

https://www.google.com/search?q=what+is+elicit.org

Which of course begs a follow-up question: what is GPT-3? And if you haven’t discovered GPT-3 yet, well, buckle up for the ride:

GPT-3 belongs to a category of deep learning known as a large language model, a complex neural net that has been trained on a titanic data set of text: in GPT-3’s case, roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books. GPT-3 is the most celebrated of the large language models, and the most publicly available, but Google, Meta (formerly known as Facebook) and DeepMind have all developed their own L.L.M.s in recent years. Advances in computational power — and new mathematical techniques — have enabled L.L.M.s of GPT-3’s vintage to ingest far larger data sets than their predecessors, and employ much deeper layers of artificial neurons for their training.
Chances are you have already interacted with a large language model if you’ve ever used an application — like Gmail — that includes an autocomplete feature, gently prompting you with the word ‘‘attend’’ after you type the sentence ‘‘Sadly I won’t be able to….’’ But autocomplete is only the most rudimentary expression of what software like GPT-3 is capable of. It turns out that with enough training data and sufficiently deep neural nets, large language models can display remarkable skill if you ask them not just to fill in the missing word, but also to continue on writing whole paragraphs in the style of the initial prompt.

https://www.nytimes.com/2022/04/15/magazine/ai-language.html
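GPT-3 itself is obviously far beyond anything you or I will train at home, but the “fill in the missing word” framing at its core is simple enough to sketch. Here is a toy next-word predictor built on nothing more than word-pair counts; a large language model is, very loosely, this idea scaled up by many orders of magnitude, with a deep neural network in place of a lookup table. (To be clear, this is only an illustration of the framing, not of how GPT-3 works inside.)

```python
# A toy "autocomplete": count which word tends to follow which, then
# suggest the most common continuation.

from collections import Counter, defaultdict

corpus = (
    "sadly i won't be able to attend the meeting . "
    "i won't be able to attend the party . "
    "i will be able to join the call ."
).split()

next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def complete(prompt, length=4):
    words = prompt.lower().split()
    for _ in range(length):
        candidates = next_word.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("Sadly I won't be"))   # -> "sadly i won't be able to attend the"
```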

It’s wild, there’s no other way to put it:


So, OK, cool tech. But cool tech without the ability to apply it is less than half of the story. So what might be some applications of GPT-3?

A few months after GPT-3 went online, the OpenAI team discovered that the neural net had developed surprisingly effective skills at writing computer software, even though the training data had not deliberately included examples of code. It turned out that the web is filled with countless pages that include examples of computer programming, accompanied by descriptions of what the code is designed to do; from those elemental clues, GPT-3 effectively taught itself how to program. (OpenAI refined those embryonic coding skills with more targeted training, and now offers an interface called Codex that generates structured code in a dozen programming languages in response to natural-language instructions.)

https://www.nytimes.com/2022/04/15/magazine/ai-language.html

For example:

(Before we proceed, assuming it is not behind a paywall, please read the entire article from the NYT.)


But about a week ago or so, I first heard about Elicit.org:

Watch the video, play around with the tool once you register (it’s free) and if you are at all involved with academia, reflect on how much has changed, and how much more is likely to change in the time to come.

But there are things to worry about, of course. An excellent place to begin is with this essay by Emily M. Bender, on Medium. It’s a great essay, and deserves to be read in full. Here’s one relevant extract:

There is a talk I’ve given a couple of times now (first at the University of Edinburgh in August 2021) titled “Meaning making with artificial interlocutors and risks of language technology”. I end that talk by reminding the audience to not be too impressed, and to remember:
Just because that text seems coherent doesn’t mean the model behind it has understood anything or is trustworthy
Just because that answer was correct doesn’t mean the next one will be
When a computer seems to “speak our language”, we’re actually the ones doing all of the work

https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd

I haven’t seen the talk at the University of Edinburgh referred to in the extract, but it’s on my to-watch list. Here is the link, if you’re interested.

And here’s a Twitter thread by Emily M. Bender about Elicit.org specifically:


In response to this critique and other feedback, Elicit.org have come up with an explainer of sorts about how to use Elicit.org responsibly:

https://ought.org/updates/2022-04-25-responsibility

Before we proceed, I hope aficionados of statistics have noted the null hypothesis problem (which error would you rather avoid) in the last sentence of pt. 1 in that clipping above!


So all that being said, what do I think about GPT-3 in general and Elicit.org in particular?

I’m a sucker for trying out new things, especially from the world of tech. Innocent until proven guilty is a good maxim for approaching many things in life, and to me, so also with new tech. I’m gobsmacked by tools like GPT-3 and DALL-E 2, and their application to new tasks is amazing to see.

But that being said, there is a lot to think about, be wary of and guard against. I’m happy to keep an open mind and try these amazing technologies out, while keeping a close eye on what thoughtful critics have to say.

Which is exactly what I plan to do!

And for a person with a plan such as mine, what a time to be alive, no?

Have you tried Elicit.org yet?

Video 1:

And Video 2:

Supply and Demand, Complements and Substitutes and DALL-E 2

Before we begin, and in case some of you were wondering:

Early last year, San Francisco-based artificial intelligence company OpenAI launched an AI system that could generate a realistic image from the description of the scene or object and called it DALL.E. The text-to-image generator’s name was a portmanteau coined after combining the artist Salvador Dali and the robot WALL.E from the Pixar film of the same name.

https://analyticsindiamag.com/whats-the-big-deal-about-dall-e-2/

Dall-E 2 is amazing. There are ethical issues and considerations, sure, but the output from this AI system is stunning:

A rabbit detective sitting on a park bench and reading a newspaper in a Victorian setting (Source)

And just in case it isn’t clear yet, no such painting/drawing/art existed until this very sentence, the one that is the caption, was fed to the AI. And it is the AI that “created” this image. Go through the entire thread.


This has led, as might be expected, to a lot of wondering about whether artists are going to be out of a job, and the threats of AI to humanity at large. I do not know enough to be able to offer an opinion one way or the other where the latter is concerned, but I do, as an economist, have some points to make about the former.

These thoughts were inspired by reading Ben Thompson’s latest (freely available) essay on Dall-E 2, titled “DALL-E, the Metaverse, and Zero Marginal Content”. He excerpts from the OpenAI website in his essay, and this sentence stood out:

DALL-E is an example of how imaginative humans and clever systems can work together to make new things, amplifying our creative potential.

https://openai.com/dall-e-2/

And that begs an age-old question where economists are concerned: is technology a complement to human effort, or a substitute for it? The creators of Dall-E 2 seem to agree with Steve Jobs, and think that the AI is very much a complement to human ingenuity, and not a substitute for it.

I’m not so sure myself. For example: is Coursera for Campus a complement to my teaching or a substitute for it? There are many factors that will decide the answer to this question, including quality, price and convenience among others, and complementarity today may well end up being substitutability tomorrow. If this isn’t clear, think about it this way: cars and drivers were complementary goods for decades, but today, is a self-driving car a complement or a substitute where a driver is concerned?
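For what it’s worth, economists have a standard way of making the complement-versus-substitute distinction precise: the sign of the cross-price elasticity of demand. A quick reminder, in symbols:

```latex
% Cross-price elasticity of demand for good x with respect to the price of good y:
\[
  \varepsilon_{xy} \;=\; \frac{\partial Q_x}{\partial P_y} \cdot \frac{P_y}{Q_x}
\]
% \varepsilon_{xy} < 0 : complements (a pricier y means less demand for x, e.g. cars and drivers, historically)
% \varepsilon_{xy} > 0 : substitutes (a pricier y pushes demand towards x, e.g. a self-driving car versus a driver)
```

The point of the paragraph above is that this sign is not fixed for all time: the same pair of goods can drift from one side of zero to the other as the technology matures, which is exactly the Coursera-and-my-teaching question.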

But for the moment, I agree: this is an exciting new way to generate content, and is likely to work best when used as a complement by artists. Note that this is based on what I’ve seen and read – I have not myself had a chance to use or play around with Dall-E 2.


The title of today’s blog post is about substitutes and complements, which we just finished talking about in the previous section, but it also includes references to demand and supply. What about demand and supply?

Well, Ben Thompson talks about ways to think about social media firms today. He asks us to think about Facebook for example, and asks us to reflect upon where the demand and the supply for Facebook as a service comes from.

Here’s my understanding, from having read Ben Thompson’s essay: Facebook’s demand comes from folks like you and I wanting to find out what, well, folks like you and I are up to. What are our friends, our neighbors, our colleagues and our acquaintances up to? What are their friends, neighbors, colleagues and acquaintances up to? That’s the demand.

What about the supply? Well, that’s what makes Facebook such a revolutionary company – or at least, made it revolutionary back then. The supply, as it turns out, also came from folks like you and I. We were (and are) each other’s friends, neighbors, colleagues and acquaintances. Our News Feed was mostly driven by us in terms of demand, and driven by us in terms of supply. Augmented by related stuff, and by our likes and dislikes, and news sources we follow and all that, but demand and supply come from our own networks.

TikTok, Thompson says, is also a social network, and supply and demand is also user driven, but it’s not people like us that create supply. It is just, well, people. TikTok “learns” what kind of videos we like to see, and the algorithm is optimized for what we like to see, regardless of who has created it.

But neither Facebook nor TikTok are in the business of generating content for us to see. The former, to reiterate, shows us stuff that our network has created or liked, while the latter shows us stuff that it thinks we will like, regardless of who has created it.

But how long, Ben Thompson’s essay asks, before AI figures out how to create not just pictures, but entire videos? And when I say videos, not just deep fakes, which already exist, but eerily accurate videos with depth, walkthroughs, nuance, shifting timelines and all the rest of it.

Sounds far-fetched?

Well, I remember taking an hour to download just one song twenty years ago, and I can now stream any song in the world on demand. And soon (already?) I will be able to “create” any song that I like, by specifying mood, genre, and the kind of lyrics I want.

How long before I can ask AI to create a movie just for me? Or just me and my wife? Or a cartoon flick involving me and my daughter? How long, in other words, before my family’s demand for entertainment is created by an AI, and the supply comes from that AI being able to tap into our personal photo/video collection and make up a movie involving us as cartoon characters?

Millions of households, cosily ensconced in our homes on Saturday night, watching movies involving us in whatever scenario we like. For homework, read The Secret Life of Walter Mitty by Thurber (the short story, please, not the movie!), Snow Crash by Neal Stephenson, and The Seven Basic Plots by Christopher Booker.


There are many tantalizing questions that arise from thinking about this, and I’m sure some have struck you too. But I don’t want to get into any of them right now.

Today’s blog post has a very specific point: it doesn’t matter how complicated the issue at hand is. Simple concepts and principles can go a very long way in helping you frame the relevant questions required for analysis. Answering them won’t be easy, as in this case, but hey, asking (some of) the right questions is a great place to start.

Dall.E 2

It’s been a week or so since I’ve seen this, and I remain gobsmacked.

Etc: Links for 11th October, 2019

  1. Celebrating Rafa Nadal. That this piece is written 14(!) years after Nadal won his first Grand Slam is beyond remarkable. I am, for the record (and will forever be) a Federer acolyte, but I gave up on the who-is-better battle long, long ago. I am just grateful to be a tennis fan alive in this era.
    ..
    ..
    “Under different circumstances, his performance would have been more than good enough to win the tournament. He had the bad luck of facing Nadal, one of the sport’s greatest champions, on a night when Nadal simply refused to lose.”
    ..
    ..
  2. A useful list for lazy weekends: the signature film of every city. The excerpt below is about Washington D.C. Pair this recommendation with an app called JustWatch, which is worth its proverbial weight in gold.
    ..
    ..
    “If you want to get a sense of a city in a movie, following around a couple of reporters for a major paper is a damn good way to evoke the mood of the metropolis. Watching Bob Woodward and Carl Bernstein relentlessly prowl the streets and restaurants and parking garages of the nation’s capital in pursuit of a truth that will ultimately bring down the President of the United States is as D.C., and American, as it gets. Honorable mention: Ashby’s “Being There,” Friedkin’s “The Exorcist”, Brooks’ “Broadcast News,” the Coens’ “Burn After Reading,” Schumacher’s “D.C. Cab” and countless others.”
    ..
    ..
  3. Humanity is a kind of ‘biological boot loader’ for AI, says Elon Musk.
    ..
    ..
    “People don’t realize we are already a cyborg. Because we are so well integrated with our phones and our computers. The phone is almost like an extension of yourself. If you forget your phone, it’s like a missing limb. But the bandwidth, the communication bandwidth to the phone is very low, especially input. So in fact, input bandwidth to computers has actually gone down, because typing with two thumbs, as opposed to 10 fingers, is a big reduction in bandwidth.”
    ..
    ..
  4. The 100 Best Albums of the 21st Century. I am not qualified to pass opinion, but my commute is, as they say, sorted.
    ..
    ..
  5. A book recommendation via MR. In Other Rooms, Other Wonders, by Daniyal Mueenuddin. I have purchased it, but haven’t read it yet. Probably (and hopefully) on the Thailand trip.

Etc: Links for 13th September, 2019

  1. The filmy divide in India.
    ..
    ..
  2. Man or woman?
    ..
    ..
  3. “Bau once told Rahul Bhattacharya, in an encounter for the ages from the book Pundits from Pakistan, that the action was “all artificial”, part of a carefully created persona built to defeat batsmen. It wasn’t the bowler or the ball that beat batsmen, it was this persona. They say that about Shane Warne too, about how batsmen were dead just from the theatre of Warne at the top of his mark, but man, did it ring true with Bau.”
    ..
    ..
    Osman Samiuddin on Abdul Qadir.
    ..
    ..
  4. “When we seek Western fads at Indian levels of income, the economic cost of our perceived moral rectitude will be borne by the poor.”
    ..
    ..
    On opportunity costs.
    ..
    ..
  5. On food, history, India and Asia.

Tech: Links for 8th August, 2019

Learning without technology in the twenty-first century is, in my opinion, an immense waste of available resources. That being said, here’s a list of five specific things, all created by Google, that may help you learn better.

As always please let me know how I can add to the list.

  1. Google Classroom: whether a student or an educator, this is a technology that is immensely helpful for setting up links related to a classroom. Whether you have an institutional ID, or a plain vanilla Gmail account, you can use Google Classroom to set up an online learning environment.
    ..
    ..
  2. Google Docs: Is useful and well known anyway, but I remain convinced that students could do a lot better with Google Docs as a collaborative note-taking tool.
    ..
    ..
  3. Google Keep: Is a great place to, well, keep stuff when doing online research. Integrates well with Google Docs as well.
    ..
    ..
  4. The Learn Digital With Google program (it’s free!)
    ..
    ..
  5. Google AI Education. I have come across this literally only today, so can’t vouch for it entirely – but sure seems interesting. This one in particular caught my eye.

Tech: Links for 16th July, 2019

  1. “On July 3, I challenged readers of my Big Internet Math-Off pitch to try to find the way to divide 24 muffins among 25 people that makes the smallest piece as large as possible. ”
    ..
    ..
    Click on this link to get a sense of a truly interesting math problem, and how to think about it.
    ..
    ..
  2. “Sitting in a hotel lobby in Tangier, Morocco, Charity Wayua laughs as she recounts her journey to the city for a conference on technology and innovation. After starting her trip in Nairobi, Kenya, where she leads one of IBM’s two research centers in Africa, she had to fly past her destination for a layover in Dubai, double back to Casablanca, and then take a three-and-a-half-hour drive to Tangier. What would have been a seven- to eight-hour direct flight was instead a nearly 24-hour odyssey. This is not unusual, she says.”
    ..
    ..
    This link contains an interesting set of further links about how AI is being used in Africa – and you also get a sense of the opportunities and limitations there.
    ..
    ..
  3. “Then there’s Matthew Porter. He requires only a camera, model cars, and a bit of Photoshop to send muscle cars flying in his new book, The Heights. It’s a resourceful, low-tech homage to some of the most iconic, memorable stunts in the car-chase genre. “There’s just nothing more visceral than a car in the air,” he says. “It’s aspirational and romantic.””
    ..
    ..
    These kinds of tech articles are the most fun to read. Tinkering around can yield surprisingly good (and fun!) results.
    ..
    ..
  4. “Obviously, then, what is needed is not only people with a good background in a particular field, but also people capable of making a connection between item 1 and item 2 which might not ordinarily seem connected.”
    ..
    ..
    That is from a lovely essay by Isaac Asimov on creativity.
    ..
    ..
  5. “A group of researchers have now used this technique to munch through 3.3 million scientific abstracts published between 1922 and 2018 in journals that would likely contain materials science research. The resulting word relationships captured fundamental knowledge within the field, including the structure of the periodic table and the way chemicals’ structures relate to their properties. The paper was published in Nature last week.”
    ..
    ..
    A very short, but no less delightful read on some of the more mind-boggling applications of AI.

Tech: Links for 9th July, 2019

  1. “In it, astronaut Sally Jansen has been working to come to grips with a Mars mission that went disastrously wrong, and NASA ended its crewed missions into space. But while she’s trying to move on, scientists detect an object designated 2I/2044 D1 entering our solar system, and when it begins to slow down, they realize that it’s an alien artifact. Jansen is called in to try and intercept the object and figure out what is behind it before it reaches Earth.”
    ..
    ..
    Science Fiction is a great way to learn a lot and have a lot of fun while doing so, and for that reason, I thoroughly enjoyed learning about the premise of this book. In similar vein, I recently (and finally) finished The Three Body Problem, and can heartily recommend it.
    ..
    ..
  2. “The camera was loaded with machine vision algorithms trained by Hamm himself. They identified whether Metric was coming or going and whether he had prey in his mouth. If the answer was “yes,” the cat flap would lock for 15 minutes and Hamm would get a text. (In a nice flourish, the system also sends a donation, or “blood money” as Hamm calls it, to the National Audubon Society, which protects the birds cats love to kill.)”
    ..
    ..
    There are many people who bandy about the word AI these days, but this very short read (and within it, a very entertaining video) helps you understand how it could be applied in myriad ways.
    ..
    ..
  3. “LightSail 2 is more ambitious and will actually try to maneuver through space, and even boost itself into different orbits using sunlight. The new mission’s mission control website will let people around the world follow along, including the 23,331 people who contributed to the project’s Kickstarter campaign, which raised $1,241,615 for the spacecraft.”
    ..
    ..
    A third link from the same website (either The Verge is on fire, or I am being lazy today), but the best of the lot, in my opinion. It is now possible to crowdfund a satellite launch that contains a sail – and you can now watch your investment in space as it flies above your head. What a time to be alive.
    ..
    ..
  4. “But while Tufte’s concerns are not limited to charts, he has spent a lifetime thinking through what he called the “perennial” problem of how to represent a multidimensional world in the two dimensions of the page or screen. At the end of the day, he pulled out a first edition of Galileo Galilei to show how the great minds of the past had grappled with the same issues. He rhapsodized over Galileo’s tiny, in-line sketches of Saturn, which clearly inspired his own advocacy of “sparklines” (tiny charts embedded in text at the same size as the text), as well as some beautifully precise illustrations of sunspots.”
    ..
    ..
    Data visualization, medical visits, Galileo and sparklines. As they say, self-recommending.
    ..
    ..
  5. “And with 92 percent of future jobs globally requiring digital skills, there’s a focus on helping students develop skills for careers that don’t yet exist. Last year, Sweden declared coding a core subject to be taught from the first year of primary school. And there is an appetite for these skills among students, too, with 85 percent of Brazilians from 16-23 indicating that they want to work in the technology sector. ”
    ..
    ..
    Well, there’s a thought – I refer to Sweden’s decision. One, complements, not substitutes. Two, the links within this one are worth following – this is a subject very close to my heart.