The AI arms race is affecting minorities, and not in a good way

Unintentional bias in AI has warranted a lot of public criticism recently. However, we often don’t talk enough about the intentional bias that shows up in an alarming number of fourth-wave innovations. One shocking example came to light in late 2019, when leaked documents revealed that the popular social media platform TikTok was doling out “algorithmic punishments for unattractive and impoverished users.” Human moderators were instructed to flag users who were “unattractive, poor, or otherwise undesirable,” at which point TikTok’s algorithms would prevent most users from viewing their content. These standards were put in place by the Chinese parent company ByteDance, a corporation that U.S. officials say has significant “ties to the Chinese Communist Party.” Punishing the underprivileged on social media is worrisome in itself; numerous studies have outlined the damaging effects that unpopularity on social media can have on young people. This censorship of the “poor” and “ugly” is offensive and harmful enough, but the discrimination does not stop there.

More recently, TikTok has begun blocking statements like “Black Lives Matter” and “Black support” from user bios. TikTok claims that the censorship was a fluke in algorithms designed to block hate speech directed at Black people. Critics are skeptical of this flimsy excuse, noting that phrases like “neo nazi” and “white supremacy” did not trigger the “protective” algorithms. TikTok claims that its algorithms will be updated to address the issue, but many users are skeptical. One comedian and Black activist complained that he lost the ability to share certain types of content after calling attention to the platform’s discriminatory algorithms. The obvious intention here is to keep TikTok users from seeing complaints about the platform’s damaging AI screening tools. TikTok has harmed individual users by blocking their content, but it has also hampered the worldwide movement toward racial justice.

TikTok isn’t the only Chinese tech giant that silences minority voices. WeChat, the popular Chinese messaging service, recently began removing LGBT group chats in an effort to silence a population the government sees as a threat to its authority. While most WeChat users reside in China, around 200 million live outside the country, with 19 million in the United States alone. In other words, millions of people throughout the West have been prevented from discussing LGBT issues and topics because Chinese leaders feel threatened by them.

Photo Courtesy of The Reel Network

It is clear that China will use seemingly benign technology to silence progressive political movements that pose a threat to its balance of power. This should serve as a warning for liberal democracies that value individual rights and protections, because foreign technology is actively stifling the groups that promote those very things. Of course, any corporation in any part of the world can enforce discriminatory policies. The difference between Western companies and those from places like Russia and China is that Western citizens have the ability to punish bad behavior in their own countries. Liberal democracies provide opportunities to sue, fine, and regulate tech companies via their political institutions. Unfortunately, these tools for change are not available across the world. Though difficult, Western nations can regulate company activity at home; there is no legitimate procedure that would allow the West to regulate companies from Eastern countries with different power structures. We can’t effectively sue or target legislation at foreign tech companies. If change is going to happen, it has to come about through some other means.

When presented with these instances of discrimination in Chinese AI, we might think that the simple solution is to ban such technology from reaching Western shores. Unfortunately, this is a bit more complicated than it may appear. China and Russia have both made it clear that attempts to regulate or ban their largest tech companies will result in economic sanctions at the least. Powerful American companies like Apple are afraid of losing foreign markets, and lobby hard to prevent regulation of foreign tech giants.

Zhang Yiming, CEO and founder of ByteDance

Another problem with technology bans is the rising quality of foreign products. TikTok is a good example of this. Consumers have known for around two years now that TikTok engages in discriminatory censorship and that it harvests location, voice, and fingerprint data from its users, many of whom are minors. Even so, millions of Westerners use the app because of its entertaining content and attractive user interface. In short, foreign products are of a much higher quality than in previous technology arms races. The U.S. has traditionally been an exporter of cutting-edge tech products and isn’t used to receiving competitive goods from non-allied states. Western consumers don’t usually think about international politics when deciding to download an app or post content; they just want high-quality products. We simply aren’t used to the idea that foreign goods and services could pose a threat to our well-being.

Ziggi Tyler, pictured above, says TikTok blocked activist statements in his bio and then punished his account when he spoke out.

One solution to international AI discrimination is to keep markets open but prohibit companies in non-allied countries from collecting user data in the U.S. and across the West. As noted, this will be tricky due to American lobbying efforts and the threat of repercussions. Another possible solution is to boycott applications that use data to discriminate. If we commit to educating ourselves on the products we use and avoiding those that stifle democratic values, we can effectively prevent foreign influences from discriminating against minority groups. One of the most powerful tools at our disposal is our ability to choose what we download and where we interact online. Let corporate bigots feel the sting of lost profits. Together we can protect the rights of minorities and maybe even change the course of AI development forever. If you are passionate about activism, one of the best things you can do is, ironically, very passive: avoid services that use their data irresponsibly until they commit to making products with liberty and justice for all.


China holds more data than any other country. Here’s why, and what to do about it.

Data privacy is a hot topic that has politicians from both ends of the political spectrum threatening regulation and punishment for U.S.-based companies that secretly gather data from unsuspecting users. However, the biggest threat to data privacy isn’t any American or European business model. China harvests more data than any other country or corporation on earth. Here’s why.

A rendition of China’s autonomous military vehicle “Guizhou Soar Dragon”

China has long known that artificial intelligence will come to play a lead role in international politics. They already use complex AI to monitor Chinese citizens and international adversaries, and are currently working on fully automated intelligent weaponry. China has pushed to become the number one collector of big data because data sets function as the “fuel” for AI technology: in order for a neural network to learn, it has to have a source of information to learn from. This is why Beijing has prioritized and subsidized research and development of data mining devices at an unprecedented level. According to the Pentagon, China now holds over 30% of the world’s data, and tech advocate Russ Shaw says that China is amassing “unprecedented amounts of data, unlike anything we are seeing in Europe and the U.S. The combination of advanced technology and government backing has allowed the country to harness the power of its enormous population.”
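The “data as fuel” idea above can be made concrete with a toy sketch. This is purely illustrative, not any real AI system: a one-parameter model tries to recover a simple linear rule from noisy samples, and its estimate generally gets closer to the truth as the sample size grows, which is exactly why data volume matters.

```python
import random

random.seed(0)

def fit_and_score(n_samples):
    """Fit y = slope * x to noisy samples of the true rule y = 3x,
    and return how far the learned slope lands from the truth."""
    xs = [random.uniform(-1, 1) for _ in range(n_samples)]
    ys = [3 * x + random.gauss(0, 0.5) for x in xs]
    # One-parameter least-squares fit through the origin.
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return abs(slope - 3)

# More "fuel" (data) generally yields a more accurate model.
error_small = fit_and_score(10)
error_large = fit_and_score(10_000)
```

With ten samples the fitted slope can miss the true rule badly; with ten thousand it lands very close. The same scaling logic, at vastly larger sizes, is what makes national data stockpiles strategically valuable.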

Joe Biden and Xi Jinping meet in 2011 while serving as Vice Presidents

Most of China’s data comes from independent businesses that function in a mostly free-market economy. Like the U.S., China lets startups rise and fail according to popular demand. This sets the Middle Kingdom apart from other historically communist countries that couldn’t keep up with Western technological advances, like the Soviet Union during the Cold War. China knows that market incentives will bring the best AI to the top, and millions of Chinese citizens used to poverty and chronic hunger are now battling to be the best data collectors on the market. This makes Chinese R&D look a little like the U.S.’s: there are government subsidies, market forces, and little regulation of tech startups. However, two things separate China’s AI situation from the U.S.’s, and both give Beijing the upper hand.

The first thing that China has and the U.S. lacks is a willingness to share information. Chinese citizens don’t worry about ride-sharing apps and social networks collecting their data. Chinese culture has preached central authority and a strong sense of community for millennia, and most citizens are comfortable with (or at least accustomed to) constant surveillance. According to the manager of 500 Startups in China, “Chinese citizens [are] really proud of the fact that we’re actually big enough to even be able to compete with the U.S. in terms of AI. And I think it is just a really exciting time to be in China.” Data collection is seen as a good thing, and sharing personal info is a patriotic duty. Chinese citizens know that data collection will strengthen their presence on the international stage.

The U.S. has a very different relationship with data privacy. Personal liberty and individual rights are at the core of American values. Starting with the Revolutionary War, independence and privacy have shaped the U.S.’s legal system and mainstream culture. Americans will forgo life-saving vaccines and retreat to “off-grid” rural hideouts in order to protect their personal information. Often these fears have little or no practical rationale, and yet citizens prove time and again that they are willing to reshape their lives in order to escape surveillance.

It is no wonder that Americans feel uncomfortable when companies want to track them. The very concept of personalized advertising has caused an uproar across the West. Americans are far from allowing complete insight into their day-to-day lives the way the Chinese do. In the U.S., a complex legal schema and cultural indoctrination keep Americans resistant to any sort of data mining practice, and not without reason: identity theft, exploitation, and cyber attacks wreak havoc on millions of Americans every year. However, resistance to data tracking is often not the direct result of fear of cyber crimes. Instead, Americans use such instances to justify their long-standing devotion to privacy. This means that Americans gather less data, and therefore cut themselves off from future innovation.

Man critiques big data in anti-surveillance protests of 2013

Concern for privacy, while well intentioned, will end up having terrible consequences for the American populace if it gives Chinese technology the upper hand. Platforms like the Chinese-owned TikTok have already proven that Eastern tech can be popularized in the West, and that such technology can be used to gather large swathes of data without users noticing. TikTok recently updated its terms and conditions to allow for the tracking of biometric data, including things like voice patterns and fingerprints. When a government tries to step in and prevent this kind of surveillance, as the Trump administration attempted with TikTok, Chinese leaders threaten retaliation in the form of economic hostility.

As Chinese technology becomes increasingly effective and popular, it will be harder and harder to regulate the data mining of American citizens. Even if legal actions bar companies from gathering certain types of data, it will not be difficult for China to secretly steal information and escape extradition. As we are learning from recent ransomware attacks, foreign adversaries have no problem housing online criminals who attack the United States. There is no reason to believe that Chinese technology will cut back on its data collection, and precedent dictates that it will only increase and become harder to detect.

Hong Kong protesters topple a smart lamppost with surveillance capabilities

China has a cultural advantage over the U.S. when it comes to AI development, but culture isn’t its only upper hand. China also has a strong central authority and a history of government interference with business, which makes it easier for China to use data for military purposes. The United States, on the other hand, has a complex and sensitive relationship between government and corporations. Big AI companies like Google and Amazon have a hard time forging business relationships with the federal government. Google employees recently lobbied the company to pull out of billion-dollar defense contracts, and many firms don’t see eye to eye with the U.S. Department of Defense. Data mined by U.S. companies hardly ever makes its way to the Pentagon, which means that already scarce data is withheld from government use.

China doesn’t have as many issues with government control of enterprise. It has already fined large tech companies that were trading, or were poised to trade, on the New York Stock Exchange. Jack Ma was notably silenced after criticizing Beijing’s reluctance to allow his fintech company Ant Group an IPO. Chinese leaders fear that public investment in their tech giants will give American shareholders ownership of their swathes of data. Beijing knows that it has the advantage of more information, and it wants to keep it that way. Currently, Beijing can seize data from Chinese companies without any international repercussions. Once American shareholders own that data, however, funneling privately collected information into the central government will become more difficult, and investors will gain the ability to use Chinese data as they please. This is not in China’s best interest, at least according to the CCP, which envisions a world where foreign data flows into China without any of its own data leaking out. China wants an information monopoly, and it is on its way to getting one.

Jack Ma’s reappearance after a disappearance that followed his critique of CCP’s business regulations

If we want to avoid Chinese surveillance and AI dominance, we should do what it takes to keep America at the front of tech advancement. In the case of AI, this means better relationships with the federal government and fewer worries about privacy. It is a noble thing to resist government contracts if the motivation is to avoid human suffering. It is also good that American citizens take their privacy and independence seriously. However, we are at a pivotal moment where reluctance to innovate will lead to a loss of Western dominance and our way of life. If you value privacy, personal freedoms, and peace, then the worst thing to do right now is fight data collection and AI development. Resistance today could mean Chinese domination tomorrow, with products that track our every move and weaponry to keep us in line. It sounds dystopian, and no reigning power ever likes to imagine its fall. However, we would do well to learn from past world leaders who held on to the old ways of doing things and lost their grip on the world stage. Support Western AI; your data will be safer if you do.


Why you should get paid for your data

Students at M.I.T. have created “clothes” that can collect and store data. The wearable fabric is made of a special fiber that could one day capture digital information about your body and lifestyle. Optimists hope the technology will be used to monitor the health of wearers or keep them away from dangerous locations. While these possibilities are enticing, we must wonder whether anyone would actually purchase clothing that allows corporations to essentially spy on consumers. The West is already fed up with electronics that track activity for advertising purposes, and some go so far as to avoid benign things like vaccines and 5G internet out of fear that such innovations could lead to invasive monitoring and infringements on privacy. These qualms are mostly unfounded, but they reveal how many people are willing to avoid life-saving products in order to regain some sense of privacy. If Western consumers are already holding tight to their personal data, will they ever accept products that collect even more information? Should they?

The world’s first digital fibers. (Photo by Roni Cnaani)

Some privacy concerns are legitimate. Consumers complain that data collection can lead to identity theft and exploitation, and that it can be a little unnerving. However, we know that data collection fuels AI innovation, which will play a huge role in the international balance of power. China currently holds 30% of the world’s data, a share that already surpasses the U.S. and other Western nations. If this data gap continues to widen, the AI arms race will lean further and further toward China. The CCP has named AI world dominance a top priority, and shows no reluctance in using AI technology to surveil, “grade,” and even enslave its own citizens. A world in which China holds military and commercial AI superiority could greatly harm Western society and its core values. Weaponized AI will strengthen China’s cyber warfare and its ability to develop lethal autonomous weapons (LAWS). Superior commercial AI will make it more likely that Westerners purchase Chinese products, making international surveillance even easier.

No matter how they feel about data collection, both critics and proponents of AI agree that China’s AI program and their flavor of governance would be bad for the West. But how will we ever win an AI arms race with a serious drought of data?

A Chinese Lethal Autonomous Weapon powered by AI

Free market trends could take care of the job for us. One reason consumers opt out of data sharing is that they don’t receive anything in compensation, besides perhaps some embarrassingly personalized ads. We can change AI development in the United States and Europe by purchasing data from the citizens who generate it. A good example of this is Receipt Hog, a data-buying app that asks users to trade pictures of their receipts for virtual tokens that can be redeemed for gift cards and prizes. The app even explains to consumers where their data will be used. Of course, the process is still not completely transparent, but it is a step toward responsible handling of personal data.

Visual from the AI for Good Global Summit

The Fourth Industrial Revolution is already here, and the industry will continue to produce life-changing products like the health-monitoring fabric from M.I.T. The way to make sure our AI production remains first rate is to give consumers a reason to share their data. This will help fuel stronger national security, but it will also make cutting-edge products easier to swallow. If something called Receipt Hog can convince people to willingly share information, imagine how much easier it could be for a life-saving invention. Current business models take the value of collected data into account and factor it into price, which allows companies to offer tangible products at a reduced cost and digital products (like apps and social networks) for free. However, as concern over data rights rises, this model may prove suboptimal. Offering consumers money for using a product, coupled with transparent explanations of where their data is going, will make AI adoption much more attractive in the West. We have to stay ahead in this technology boom, so we might as well make some money in the process. We as consumers should be given the opportunity to sell our data. Data rights will protect the world balance and accelerate our journey to the future of human civilization.

Many want to ban certain types of data collection altogether. While it is good to point out ethical issues and demand responsibility from corporations, ending data collection will not solve our privacy problems. In fact, if our lack of data allows China to control AI production, it may even increase them. Instead, let’s start asking for the right to be paid for our information. Together we can change the future.


The Rehab rip-off: smart apps are more helpful than conventional rehab

Being an addict is trying and tiring, and most Americans don’t appreciate what it takes to kick an addiction. People often believe that a really good month at an in-patient rehabilitation center is all a person needs to “get clean and stay clean” (in-patient means that patients live in the clinic). Unfortunately, the expensive in-patient rehab programs that give addicts constant 24-hour support, relaxing horseback rides, delicious food, and great accommodations are not effective at fighting addictions. Rehab centers know this, but they prefer to extort millions out of desperate, misguided parents.

Addiction Rehab is a $42 billion industry. (Photo by fotografierende)

Why isn’t in-patient rehab effective? There are a few reasons. Many of the most popular (and expensive) programs include things like equine therapy and nature retreats that have little to no empirical evidence to verify their efficacy. Some types of treatment, like confrontation therapy, make addictions even worse. But even when behavioral health professionals use more effective methods, their in-patient programs fail too often. The Blake family spent over $110,000 on premium rehab programs only for their son to die of an overdose at 27. They, like many families, were duped into believing that if they spent all they could on fancy therapies, they could save their son’s life. Why are even the best in-patient programs still so unsuccessful? Because staying clean in a wonderful, accommodating facility is very different from staying clean in day-to-day life. Most rehab programs offer little, if any, follow-up for their patients. The centers take patients’ money and send them back out into the world ill-prepared and poorer than they were before.

The Blakes with their son at his High School graduation

Now addicts have other tools to help them, thanks to the power of AI and machine learning. A number of scientists and addiction experts have built applications that offer personalized, effective addiction help. Sam Frons is a former addict who was unsatisfied with rehab programs and support groups like AA. That’s why she created Addicaid, an AI-powered app that tracks behavior and location to identify when a user is most susceptible to relapse and offer them support. Addicaid has won top recognition and investments from business and health leaders.

Sam Frons, founder of Addicaid

A group of researchers created a similar program, called Addiction CHESS, which helps users avoid dangerous locations and find the best support groups. Their peer-reviewed research found that Addiction CHESS cut the risk of relapse in half compared to patients who received only traditional rehab methods. Another group of researchers at USC created an AI-backed program that builds more effective support groups for teen patients; many support groups accidentally introduce lighter users to more serious addicts and heavier drugs. USC has also developed a similar algorithm that uses AI to help prevent teen suicide. AI has proven itself a better form of addiction recovery than traditional rehab alone.

Of course, some people do benefit from traditional rehab, the same way that even price-gouging pharmacies can provide useful medicines. However, the evidence is clear that inexpensive algorithms can offer more help than hundreds of thousands of dollars in in-patient care. If you or a loved one is considering enrolling in a rehab program, make sure to find one that offers quality, cost-effective, individualized care. And don’t forget to use these other AI tools to keep you and your loved ones on the path to recovery. If you are an investor looking to make good returns on the rehab market, invest in AI tools: they are growing fast, are more effective, and will soon become one of the primary rehabilitation methods. Whether we are looking to save money or save a life, AI has our back.


Scientists use A.I. to make money from crypto scammers

Ever heard of the pump and dump? For those who aren’t familiar with the term, a “pump and dump” refers to one of the oldest (and easiest) stock exchange scams in the book. Here’s how it works:

  1. Scammers invest in a worthless asset
  2. Scammers convince others to invest in the same worthless asset
  3. The value of the worthless asset skyrockets and other people buy in thinking the stock is actually valuable
  4. The original scammers sell their shares at an enormous profit before the value comes crashing down as other people sell off their shares.
  5. In the end, most of the people who were convinced by the scammers and the innocent bystanders lose money while the scammers profit
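The five steps above can be sketched as a toy calculation. All of the numbers here are illustrative inventions, not figures from any real scheme: the point is simply that the organizers’ profit is paid for by the followers who buy near the peak.

```python
# Toy model of the five steps above. All prices and quantities are
# made up for illustration.

def pump_and_dump(organizer_coins, entry_price, peak_price, crash_price):
    # Step 1: the organizers quietly buy the worthless asset cheap.
    organizer_cost = organizer_coins * entry_price
    # Steps 2-3: hype drives the price from entry_price up to peak_price.
    # Step 4: the organizers sell everything near the top.
    organizer_sale = organizer_coins * peak_price
    # Step 5: a follower who bought one coin at the peak and sold after
    # the crash eats the difference.
    follower_loss_per_coin = peak_price - crash_price
    return organizer_sale - organizer_cost, follower_loss_per_coin

profit, loss = pump_and_dump(organizer_coins=10_000,
                             entry_price=1, peak_price=5, crash_price=0.8)
print(profit)  # 40000 -> the organizers walk away with the gains
```

Because the organizers are the only ones who know the timing in advance, the math works out the same way every time: their gain is roughly the sum of everyone else’s losses.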
Photo by Pixabay

Today, this scheme is easier than ever to pull off thanks to cryptocurrency. Pump and dumps of traditional stocks are illegal, but crypto is a legal grey area that doesn’t yet have the same strict regulation. Crypto also makes trades fast and anonymous, both of which make scams easier to pull off. Some crypto pump and dump schemes are enormous, like the “Binance Pump Signals” group on Telegram that hosts over 400,000 members. These groups often orchestrate multiple “pumps” a day, each lasting no more than a few minutes and often less than one. The creators of the scheme will purchase a worthless crypto coin hours or days in advance and then reveal which coin will be pumped to everyone in the group at once. This way, the leaders of the scheme are sure to make a profit because they know which coin will be used beforehand. Everyone else scrambles to invest low and then sell high before the pump is over, and many group members end up losing money.

Photo by Alesia Kozik

Pump and dumps are interesting, but they are basically illegal and very volatile for everyone except the inner circle that controls the scheme. However, two data scientists, Jiahua Xu and Benjamin Livshits, discovered a legal way to make consistent and reliable returns off of crypto pump and dumps. How did they do it? With machine learning, of course! Modern problems require modern solutions.

Xu and Livshits trained an algorithm to identify when crypto coins were going to be “pumped” by analyzing purchases of different coins. The actual science is pretty complicated, but the general idea is very simple. The algorithm was taught to recognize when the conspiring leaders of a pump and dump scheme were buying into a crypto coin in preparation for their next pump. Because there are hundreds of active pump and dump groups, the A.I. analyzes thousands of crypto coins to identify which ones will be pumped next. This allows investors to buy into crypto coins while they are still cheap and then sell once the value is inflated. Essentially, the algorithm lets anyone into the inner circle of crypto pump and dumps. It allows users to make money off of these scammers instead of losing money to them.
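To make the “general idea” concrete, here is a deliberately simplified sketch of the intuition behind pre-pump detection: flag a coin whose latest trading volume spikes far above its own historical baseline. The feature choice and threshold here are assumptions for illustration, not the actual model from Xu and Livshits’ research, which uses far richer features.

```python
# Illustrative sketch only: flag coins whose most recent volume is an
# extreme outlier relative to their own history, the rough intuition
# behind detecting pre-pump accumulation. The single-feature z-score
# and the threshold of 4 are assumptions, not the published model.
from statistics import mean, stdev

def looks_like_prepump(hourly_volumes, threshold=4.0):
    """Return True if the latest hour's volume is an extreme outlier."""
    *history, latest = hourly_volumes
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest > mu
    z_score = (latest - mu) / sigma
    return z_score > threshold

quiet_coin  = [100, 90, 110, 95, 105, 98, 102]   # steady volume
target_coin = [100, 90, 110, 95, 105, 98, 900]   # sudden accumulation
print(looks_like_prepump(quiet_coin))   # False
print(looks_like_prepump(target_coin))  # True
```

A production system would run a check like this across thousands of coins at once and combine many signals, but the core move is the same: the scammers’ own preparatory buying leaves a statistical footprint.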

Photo by Andrea Piacquadio

The best part about this “trading strategy” is that it offers consistent returns. Xu and Livshits claim that the algorithm could provide a 60% return on investment in under three months. That’s not quite the sky-high return that pump and dump scammers promise, but it is safer, more reliable, and, perhaps most importantly, legal.

Young investors excited about crypto and the “democratization of Wall Street” should consider learning more about intelligent algorithmic predictions. In this wild west of blockchain currencies and meme stocks, it might be worth our time to invest in some A.I.


Can we trust self-driving cars?

When a car that claims to be “self-driving” is involved in an accident, the whole country finds out about it. Machine learning and big data have created a new era of smart technology (what experts call the fourth industrial revolution), which means that we as a society are actively deciding which jobs robots should and shouldn’t be allowed to do. Lots of different tech advancements are thrown around in the debate on A.I., but no invention is as commonly praised or critiqued as the self-driving car. People who support self-driving cars argue that they prevent accidents, reduce air pollution, and make morning commutes more productive. Those on the other side of the debate don’t deny that a driverless car would be great in theory; they only complain that such a car has not been invented yet and probably never will be. The nation is just waiting to see if a safe self-driving vehicle will ever actually come to market.

Photo by Pixabay

It is because of this debate that news of self-driving car accidents spreads across the world faster than a virus. A Tesla Model S crashed and caught fire in Texas earlier this spring. Before the flames were put out, rumors that the car had been on “Autopilot” had already sped across the country. Even major news outlets like the Wall Street Journal propagated the fake story. Later, the National Transportation Safety Board concluded that the Model S’s Autopilot function could not have been in use at the time of the accident because some of its requirements, like a paved roadway, were not present. The truth was eventually revealed after professional analyses and some angry tweets from Elon Musk, but no one found the true story as interesting. Regardless of whether Tesla Model S’s are safe or not, stories of driverless vehicle crashes endanger us all. We are eager to learn of self-driving car malfunctions because we want to know if we can trust them, and media conglomerates exploit this eagerness to peddle distorted and fabricated stories of autopilot failures and driverless deaths.

Photo by Matheus Bertelli

These rumors of unsafe driverless vehicles put us all in danger because they delay the development of life-saving transportation. Self-driving cars are already safer than human-operated ones, and they continue to get safer with every update. Driverless vehicles could save over a million lives a year worldwide if we would just trust them more. People have a hard time relinquishing control over a dangerous activity like driving to a machine, and so we choose to drive cars ourselves even though it is objectively more dangerous. Tens of thousands of Americans die on the roads every year, deaths that our reluctance to adopt self-driving vehicles helps perpetuate. Would you give away your life, or a loved one’s life, just so that you didn’t have to trust a machine to get you home? Probably not, and yet this is the deal Americans make when they avoid driverless cars.

The autopilot in self-driving cars is “driven” by A.I.; each new point of data helps the car learn how to react in different situations. An early accident in a self-driving car occurred when the autopilot didn’t recognize a white trailer. A human died because the car didn’t have enough data about white trailers. The good news about such a malfunction is that, unlike human error, it can be completely eradicated in the future with more data and development. Self-driving cars are already safe, and they would be even safer if they had more data. The problem is that cars acquire data by being driven, and not enough people drive them. Of course, you shouldn’t risk your life just for a few more data points. But since driverless cars are already safer than the one you own now, buying one wouldn’t mean risking your life to collect data; it would mean saving your life and collecting vital data on top of that. If people are too hesitant to buy a driverless vehicle, then safety issues stay around longer, which turns off potential users and continues the cycle of unnecessary death.

Photo by Pixabay

What can we do to fight this cycle of fear? The best thing to do would be to buy a driverless vehicle and take it all over the country, but that option is financially out of reach for many. The next best thing is to calm the fears of those who oppose self-drivers. Remind your friends and neighbors of how safe driverless cars actually are. Tell your eco-friendly coworkers that self-driving cars will protect the environment: the U.S. Department of Energy found that driverless vehicles could reduce transportation energy consumption by as much as 90 percent.

Big opponents of driverless vehicles know that these cars could gut gasoline and other legacy industries, and they use scary stories to spook Americans away from self-drivers. In the end, our reluctance is what will kill us. In this case, curiosity will save the cat. Don’t throw away your life; trust the A.I. behind the wheel.


AI can end inequality in public schools

When artificial intelligence is discussed in the media, it is often portrayed as an expensive technology that will only benefit the elite who are wealthy enough to invest in AI businesses or purchase high-tech products. News outlets, politicians, and even some field experts seem to think that it will be a long time before working-class Americans see any direct benefit from the “fourth industrial revolution.” That view may be overly pessimistic, as engineers and entrepreneurs continue to come up with inexpensive smart technologies that have the potential to revolutionize life for America’s poor and middle class. One such technology aims to keep a promise that has been made (and broken) in public schools across the country since 2002: to leave no child behind.

Photo by Julia M Cameron

One of the biggest challenges that teachers face regarding disadvantaged students is simply knowing what students need. This is especially true in elementary schools, where children are often not independent enough to ask for help with specific material or open up about problems at home. Studies show that teachers often misdiagnose children with emotional and/or learning disorders as simply disobedient, undisciplined, or lazy. Teachers are even more likely to misdiagnose disadvantaged students when they are minorities. A recent study found that black students are much more likely to “be seen as problematic” and punished at school than white students, even when they exhibit comparable behavior. The effect of unequal punishment is compounded by the fact that minority students are more likely to come from single-parent families, suffer from emotional distress, and experience chronic hunger, all of which can negatively impact a child’s ability to learn. Teachers in the US have a hard time discerning when a student needs extra attention or outside help, and often attempt to address problems in the classroom with punishment rather than encouragement.

Photo by Christina Morillo

Some teachers treat students poorly based simply on race or poverty. However, even teachers who treat their students with equal care and respect have a difficult time knowing when, and what sort of, help disadvantaged students need in order not to fall behind. This is where AI technology can help.

Wouldn’t it be great if there were some way to know what sort of problems a student was dealing with, and what methods would best help her? Well, there might be a way to do just that. Some schools in Europe and Asia have already begun using facial analysis tools to detect when students struggle in the classroom. One school in France uses Nestor software to record when students tend to pay attention and, more importantly, when they don’t. This data is then used to help professors adapt their teaching style to make learning easier for students.

Schools in China use what they call “smart eyes” to track student behavior in classrooms. This may sound like an Orwellian nightmare, but the data isn’t just used to punish students for misbehaving behind teachers’ backs. The main purpose of this technology is to identify when students are experiencing abnormal levels of stress, have a hard time staying awake, or display warning signs of illness. Teachers can then use this data to help identify children who could benefit from different teaching styles or outside help.

In the United States, this technology could be regulated to keep student data from being stored long-term, sold, or distributed. AI classroom helpers could close the gap that leaves disadvantaged children behind. Parents might be hesitant to allow smart technology into the classroom, but under the right supervision it stands to make a world of difference for our most precious resource: our children. An investment in classroom AI is an investment in our future.


Make pictures from words: AI turns phrases into realistic images

Goodbye, graphic design. OpenAI, the artificial intelligence company co-founded by Elon Musk, released a shocking new report that has entrepreneurs dreaming and investors drooling. OpenAI engineered a neural network named DALL-E, built on its language prediction model GPT-3, to create realistic images from just a few words of English. To be clear, there was no coding involved in the input, only simple phrases like “an armchair in the shape of an avocado” or “a painting of a capybara sitting in a field at sunset,” and DALL-E would instantly create dozens of unique images accordingly. Below are some of the images DALL-E was able to create, along with the input it received.

When told to create “an armchair in the shape of an avocado”:

When asked to create “a cube made of porcupine”:

These examples are groundbreaking because the ideas expressed are completely novel. A cube made of porcupine does not exist in reality, and DALL-E had never seen one before creating many different examples of what such a thing could look like. This means that technology like DALL-E could quickly take the place of animators, fashion designers, interior decorators, and perhaps even architects. The ability to create plausible models from novel ideas has long belonged to humans and humans only. However, as artificial intelligence learns more about language and its relation to visual data, it may trump humans in certain realms of creativity.

Here is DALL-E creating furnished rooms and stylish mannequins from just a few words.

This technology is still budding. Obviously, there are kinks to be worked out and some rough edges that need smoothing, but soon the ability to create realistic images of anything writable will be widely available. Now it is up to entrepreneurs and business-minded individuals to figure out where there is the most demand for such capabilities. Areas involving design immediately come to mind, but there are undoubtedly other, less obvious uses for this kind of tech. The race is on, and while big companies like OpenAI may have a head start, there is still room for underdogs with big aspirations and creative ideas to find a place where text-to-image technology will be useful and profitable.