
Phil 8.13.20

Ride through the park today and ask about pavilion rental – done

Māori Pronunciation

Iñupiaq (Inupiatun)

GPT-2 Agents

  • Rewrite intro, including the finding that these texts seem to be matched in some way. Done
  • Uploaded the new version to ArXiv. Should be live by tomorrow
  • Read Language Models as Knowledge Bases, and added to the lit review.
  • Discovered Antoine Bosselut, who was lead author on the following papers. Need to add them to the future work section
    • Dynamic Knowledge Graph Construction for Zero-shot Commonsense Question Answering
      • Understanding narratives requires dynamically reasoning about the implicit causes, effects, and states of the situations described in text, which in turn requires understanding rich background knowledge about how the social and physical world works. At the core of this challenge is how to access contextually relevant knowledge on demand and reason over it.
        In this paper, we present initial studies toward zero-shot commonsense QA by formulating the task as probabilistic inference over dynamically generated commonsense knowledge graphs. In contrast to previous studies for knowledge integration that rely on retrieval of existing knowledge from static knowledge graphs, our study requires commonsense knowledge integration where contextually relevant knowledge is often not present in existing knowledge bases. Therefore, we present a novel approach that generates contextually relevant knowledge on demand using generative neural commonsense knowledge models.
        Empirical results on the SocialIQa and StoryCommonsense datasets in a zero-shot setting demonstrate that using commonsense knowledge models to dynamically construct and reason over knowledge graphs achieves performance boosts over pre-trained language models and using knowledge models to directly evaluate answers.
    • COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
      • We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017). Contrary to many conventional KBs that store knowledge with canonical templates, commonsense KBs only store loosely structured open-text descriptions of knowledge. We posit that an important step toward automatic commonsense completion is the development of generative models of commonsense knowledge, and propose COMmonsEnse Transformers (COMET) that learn to generate rich and diverse commonsense descriptions in natural language. Despite the challenges of commonsense modeling, our investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs. Empirical results demonstrate that COMET is able to generate novel knowledge that humans rate as high quality, with up to 77.5% (ATOMIC) and 91.7% (ConceptNet) precision at top 1, which approaches human performance for these resources. Our findings suggest that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.

GOES

  • 10:00 sim status meeting – planning to fully evaluate off-axis rotation by Monday, then characterize the Rwheel contribution, adjust the control system, and start commanding vehicle rotations by the end of the week? Seems ambitious, but what the hell.
  • 2:00 status meeting
  • Anything about GVSETS? Yup: Meeting Wed 9/16/2020 9:00 AM – 10:00 AM

JuryRoom

  • 5:30 meeting. Discuss proposal and additional meetings

Book

  • Transfer more content

Phil 8.10.20

Really good weekend. I feel almost recharged compared to this time last week 🙂

Language Models as Knowledge Bases (Via Shimei)

  • Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as “fill-in-the-blank” cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query about an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at this https URL.

#COVID

  • Currently at 103,951 tweets translated

JuryRoom

  • Write reference section – done

GOES

  • I need to do an incremental rotation to track the reference points from last week
  • Still having problems with the secondary rotation. I’m clearly doing something basic wrong (see the note after this list)
  • Meeting with Vadim
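A note to self on the “something basic” front: quaternion composition is not commutative, so a secondary rotation behaves completely differently depending on which side it’s applied. A minimal sketch, assuming scipy’s Rotation class (not my actual code):

    from scipy.spatial.transform import Rotation

    r_primary = Rotation.from_euler('z', 90, degrees=True)
    r_secondary = Rotation.from_euler('y', 45, degrees=True)

    # left-multiply: secondary applied in the world frame, after the primary
    print((r_secondary * r_primary).as_quat())
    # right-multiply: secondary applied in the rotated (body) frame
    print((r_primary * r_secondary).as_quat())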

GPT-2 Agents

  • Create a word cloud for multiple passes of “She came into the room” (a quick sketch follows this list)
  • Add something about the place for qualitative research in a language-model sociology. Outliers are the places that the models learn to ignore, so traditional research will be the way that these marginalized populations are not forgotten.
  • Screwed up the ArXiv bibliography submission. Fixed
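A minimal sketch of the word-cloud step, assuming the third-party wordcloud package; generate_pass() is a hypothetical stand-in for whatever produces one GPT-2 completion of the prompt:

    import matplotlib.pyplot as plt
    from wordcloud import WordCloud

    def generate_pass(prompt: str) -> str:
        return prompt  # hypothetical stub: one GPT-2 completion of the prompt

    text = ' '.join(generate_pass('She came into the room') for _ in range(100))
    wc = WordCloud(width=800, height=400, background_color='white').generate(text)
    plt.imshow(wc, interpolation='bilinear')
    plt.axis('off')
    plt.show()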

ICTAI

  • Started reading the last paper, which is on <shudder> ontologies

Phil 8.7.20

#COVID

  • The Arabic translation program is chunking along. It’s translated over 27,000 tweets so far. I think I’m seeing the power and risks of AI/ML in this tiny example. See, I’ve been programming since the late 1970s, in many, many languages and environments, and the common thread in everything I’ve done was the idea of deterministic execution. That’s the idea that you can, if you have the time and skills, step through a program line by line in a debugger and figure out what’s going on. It wasn’t always true in practice, but the idea was conceptually sound.
  • This translation program is entirely different. To understand why, it helps to look at the code:

translator

  • This is the core of the code. It looks a lot like code I’ve written over the years. I open a database, get some lines, manipulate them, and put them back. Rinse, lather, repeat.
  • That manipulation, though…
  • The six lines in yellow are the Huggingface API, which allow me to access Microsoft’s Marian Neural Machine Translation models, and have them use the pretrained models generated by the University of Helsinki. The one I’m using translates Arabic (src = ‘ar’) to English (trg = ‘en’). The lines that do the work are in the inner loop:
    batch = tok.prepare_translation_batch(src_texts=[d['contents']])
    gen = model.generate(**batch)  # for forward pass: model(**batch)
    words: List[str] = tok.batch_decode(gen, skip_special_tokens=True)
  • The first line is straightforward. It converts the Arabic words to tokens (numbers) that the language model works in. The last line does the reverse, converting result tokens to English.
  • The middle line is the new part. The input vector of tokens goes to the input layer of the model, where it gets sent through a 12-layer, 512-hidden, 8-head, ~74M parameter model. Tokens that can be converted to English pop out the other side. I know (roughly) how it works at the neuron and layer level, but the idea of stepping through the execution of such a model to understand the translation process is meaningless. (A runnable sketch of the whole loop follows this list.)
  • In the time it took to write this, it’s translated about 1,000 more tweets. I can have my Arabic-speaking friends do a sanity check on a sample of these words, but we’re going to have to trust the overall behavior of the model to do our research, because some of these systems only work on English text.
  • So we’re trusting a system that we cannot verify to do research at a scale that would otherwise be impossible. If the model is good enough, the results should be valid. If the model behaves poorly, then we have bad science. The problem is that right now there is only one Arabic-to-English translation model available, so there is no way to statistically examine the results for validity.
  • And I guess that’s really how we’ll have to proceed in this new world where ML becomes just another API. Validity of results will depend on diversity of model architectures and training sets. That may occur naturally in some areas, but in others there may only be one model, and we may never know the influences that it has on us.
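For reference, a minimal, self-contained sketch of that loop, assuming the University of Helsinki Helsinki-NLP/opus-mt-ar-en checkpoint and the Huggingface transformers API of the time (prepare_translation_batch was later replaced by calling the tokenizer directly). The hard-coded row stands in for the database read/write:

    from typing import List
    from transformers import MarianMTModel, MarianTokenizer

    # pretrained University of Helsinki model: Arabic (ar) to English (en)
    model_name = 'Helsinki-NLP/opus-mt-ar-en'
    tok = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    rows = [{'contents': 'مرحبا بالعالم'}]  # stand-in for rows pulled from the db
    for d in rows:
        batch = tok.prepare_translation_batch(src_texts=[d['contents']])
        gen = model.generate(**batch)
        words: List[str] = tok.batch_decode(gen, skip_special_tokens=True)
        print(words)  # the real code writes this back to the db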

GOES

  • More quaternions. Need to do multiple axis movement properly. Can you average two quaternions and have something meaningful? (See the sketch below the image.)
  • Here’s the reference frame with two rotations based off of the origin, so no drift. Now I need to do an incremental rotation to track these points:

reference_frame
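On the averaging question: sort of. A minimal sketch, assuming scipy. The component-wise average of two quaternions is only meaningful after renormalizing, and only when the rotations are close together (and in the same hemisphere, since q and -q are the same rotation); slerp is the principled halfway point:

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    key_rots = Rotation.from_euler('y', [0, 90], degrees=True)
    slerp = Slerp([0, 1], key_rots)
    print(slerp([0.5]).as_quat())  # proper halfway rotation (x, y, z, w)

    # naive average: renormalize, and only trust it for nearby rotations
    q = key_rots.as_quat().mean(axis=0)
    print(q / np.linalg.norm(q))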

GPT-2 Agents

  • Start digging into knowledge graphs

Phil 8.6.20

Coronavirus: The viral rumours that were completely wrong (BBC)

An ocean of Books (Google Arts & Culture Experiments)

bookocean

Hopfield Networks is All You Need

  • We show that the transformer attention mechanism is the update rule of a modern Hopfield network with continuous states. This new Hopfield network can store exponentially (with the dimension) many patterns, converges with one update, and has exponentially small retrieval errors. The number of stored patterns is traded off against convergence speed and retrieval error. The new Hopfield network has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. Transformer and BERT models operate in their first layers preferably in the global averaging regime, while they operate in higher layers in metastable states. The gradient in transformers is maximal for metastable states, is uniformly distributed for global averaging, and vanishes for a fixed point near a stored pattern. Using the Hopfield network interpretation, we analyzed learning of transformer and BERT models. Learning starts with attention heads that average and then most of them switch to metastable states. However, the majority of heads in the first layers still averages and can be replaced by averaging, e.g. our proposed Gaussian weighting. In contrast, heads in the last layers steadily learn and seem to use metastable states to collect information created in lower layers. These heads seem to be a promising target for improving transformers. Neural networks with Hopfield networks outperform other methods on immune repertoire classification, where the Hopfield net stores several hundreds of thousands of patterns. We provide a new PyTorch layer called “Hopfield”, which allows to equip deep learning architectures with modern Hopfield networks as a new powerful concept comprising pooling, memory, and attention. GitHub: this https URL

Can GPT-3 Make Analogies? By Melanie Mitchell | Aug, 2020 | Medium

#COVID

  • Going to try to get the translator working and inserting best-effort results into the DB. Then we can make queries for the good results. Done! Here’s a shot of it chunking away. About one translation a second:

translation

GOES

  • Work on quaternion frame tracking
  • This might help with visualization: matplotlib.org/3.1.1/api/animation_api (a minimal example follows the image below)
  • Updating my work box. Had a weird experience upgrading pip. It hit a permissions issue and failed out without rolling back. I had to use get-pip.py to get it back
  • Looking good:

rotate_to_point
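A minimal FuncAnimation sketch along the lines of that API, assuming scipy for the rotation; it just spins one reference vector about z rather than tracking the real frames:

    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3d projection
    from matplotlib.animation import FuncAnimation
    from scipy.spatial.transform import Rotation

    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ln, = ax.plot([0, 1], [0, 0], [0, 0])

    def update(frame):
        # rotate the reference vector five degrees per frame about z
        p = Rotation.from_euler('z', frame * 5, degrees=True).apply([1.0, 0.0, 0.0])
        ln.set_data([0, p[0]], [0, p[1]])
        ln.set_3d_properties([0, p[2]])
        return ln,

    anim = FuncAnimation(fig, update, frames=72, interval=50)
    plt.show()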

JuryRoom

  • 5:30(?) meeting
  • Project grant application

ICTAI

  • Write review – done. One to go!


Phil 6.29.20

ACM IUI 2021 is the 26th annual premier international forum for reporting outstanding research and development on intelligent user interfaces.

  • ACM IUI is where the Human-Computer Interaction (HCI) and Artificial Intelligence (AI) communities meet, with contributions from related fields such as psychology, behavioral science, cognitive science, computer graphics, design, the arts, and more. Our focus is on improving the interaction between humans and digital technology, by leveraging both HCI approaches and state-of-the-art AI techniques from machine learning, natural language processing, data mining, knowledge representation and reasoning.

GOES:

  • Ping Erik about collaborative VR coding environments. Done
  • 2:00 Meeting with Vadim
    • Walked through the deep hierarchy example
    • He’s now running 4 wheels and starting to get close, though the RW speed plots are not close to the actuals. It makes me think that there is more feedback control in the satellite implementation than is implied in the documentation.

Proposal

  • After digging into the existing text, we realized that a lot of the technical sections were flat wrong, and depended on a kind of “magical ML thinking” that should have been in our phase III. So, lots of writing.

GPT-2 Agents

  • Working on trajectory plotting
  • Fix the listbox select. I was using the wrong event. It should be like this (a minimal sketch follows the screenshot).

ListBoxSelect
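A minimal sketch of the pattern, assuming plain Tkinter: bind the <<ListboxSelect>> virtual event instead of a raw mouse event, and guard against the empty selection that fires when the listbox loses focus:

    import tkinter as tk

    root = tk.Tk()
    lb = tk.Listbox(root)
    for name in ['d1', 'd2', 'd7']:
        lb.insert(tk.END, name)
    lb.pack()

    def on_select(event):
        sel = event.widget.curselection()
        if sel:  # empty when the listbox loses its selection
            print(event.widget.get(sel[0]))

    lb.bind('<<ListboxSelect>>', on_select)
    root.mainloop()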

  • Aaaand then there were a bunch of weird errors. For some reason, a selection in a new ListBox also fires the previous ListBox’s callback with no args(?), so I get an error. Chased down and fixed.
  • Plot main line. Done!

NodeLine

  • Plot legal connections of closest lines
    • I think this can be done by looking at the nodes that are connected to the start (current) node, then looking at the coordinates of all the children. The one that is closest to the line and between the current node and the target gets added to the list (see the sketch after this list)
    • Plotting all the node connections so there can be a sanity check:

NodeLine

  • Use the weight of the lines to choose the lines
  • Build a narrative rutter that describes the route (Here be there stampedes!)
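A minimal sketch of that closest-legal-move step, assuming node positions live in a networkx ‘pos’ attribute; the helper names and the simple “between” test (any neighbor that makes progress toward the target) are stand-ins, not the app’s actual code:

    import math
    import networkx as nx

    def point_to_segment_dist(p, a, b):
        """Distance from point p to the segment a-b (all (x, y) tuples)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def next_node_toward(g: nx.Graph, cur, target):
        """Neighbor of cur that hugs the cur->target line while making progress."""
        cur_p, tgt_p = g.nodes[cur]['pos'], g.nodes[target]['pos']
        best, best_d = None, float('inf')
        for n in g.neighbors(cur):
            p = g.nodes[n]['pos']
            if math.dist(p, tgt_p) >= math.dist(cur_p, tgt_p):
                continue  # not "between": no progress toward the target
            d = point_to_segment_dist(p, cur_p, tgt_p)
            if d < best_d:
                best, best_d = n, d
        return best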

Phil 6.26.20

Let’s not forget that things are not going well here:

DtZ

python_data_libs

Many useful links in the replies (like Stumpy for time series)

GPT-2 Agents

  • Working on plotting nodes correctly, being able to select them, then plotting closest legal moves that reach a destination
  • Got node selection working!

2020-06-26

With loaded nodes as well. I have an issue where the callback from the mouse is happening before the selection in a list, so I need to fix that:

2020-06-26 (1)

GOES

  • 10:00 Meeting with Vadim
  • Realized that I had been too fancy and couldn’t remember how to deal with commands to individual controllers. Figuring that out now – done!

Book

  • Meeting with Michelle to discuss editing – went very well. Sent her a copy of the “book” version of the dissertation

Phil 6.25.20

Latent Embeddings of Point Process Excitations

  • When specific events seem to spur others in their wake, marked Hawkes processes enable us to reckon with their statistics. The underdetermined empirical nature of these event-triggering mechanisms hinders estimation in the multivariate setting. Spatiotemporal applications alleviate this obstacle by allowing relationships to depend only on relative distances in real Euclidean space; we employ the framework as a vessel for embedding arbitrary event types in a new latent space. By performing synthetic experiments on short records as well as an investigation into options markets and pathogens, we demonstrate that learning the embedding alongside a point process model uncovers the coherent, rather than spurious, interactions.

Misinformation, Crisis, and Public Health—Reviewing the Literature

  • The Covid-19 pandemic has been accompanied by a parallel “infodemic” (Rothkopf 2003; WHO 2020a), a term used by the World Health Organization (WHO) to describe the widespread sharing of false and misleading information about the novel coronavirus. Misleading information about the disease has been a problem in diverse societies around the globe. It has been blamed for fatal poisonings in Iran (Forrest 2020), racial hatred and violence against people of Asian descent (Kozlowska 2020), and the use of unproven and potentially dangerous drugs (Rogers et al. 2020). A video promoting a range of false claims and conspiracy theories about the disease, including an antivaccine message, spread widely (Alba 2020) across social media platforms and around the world. Those spreading misinformation include friends and relatives with the best intentions, opportunists with books and nutritional supplements to sell, and world leaders trying to consolidate political power.

GPT-2 Agents

  • Well, networkx can write a gexf file that Gephi can read, but not the other way around.
  • Networkx CAN read and write gml files, though. Switching to that.
  • That seems to be working well:

gml_read_write

  • Now let’s see if we can draw it in the app
  • Things are starting to get very specific. Creating a subclass.
  • Pulling attributes is not obvious. Here’s how you do it for the nodes read in from gml:
    attrs = nx.get_node_attributes(self.gml_model, 'graphics')
    for key, val in attrs.items():
        print("{} = {}".format(key, val))


  • Loading and displaying the nodes! Next, I need to get piece data from the database. Also, since the graphics attribute can be a dictionary, it may be possible to add attributes like that to the edge data (see the sketch after the screenshot)? Then I won’t need to re-access the db. Alternatively, I could update the table in the db with positions, etc. Hmmmm

GraphNavigator
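A quick sanity check of that idea, assuming networkx’s gml round-trip handles edges the way it handles the node graphics attribute (dicts serialize as nested records):

    import networkx as nx

    g = nx.Graph()
    g.add_edge('d2', 'd7')
    # dict-valued edge attributes should write out as nested gml records
    nx.set_edge_attributes(g, {('d2', 'd7'): {'piece': 'white queen', 'count': 3}})
    nx.write_gml(g, 'model.gml')

    g2 = nx.read_gml('model.gml')
    print(g2.edges['d2', 'd7'])  # expect {'piece': 'white queen', 'count': 3}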

GOES

  • 10:00 Meeting with Vadim. Nope, he broke his code. Rescheduled for tomorrow

Proposal

  • Work on technical section with Aaron?

Phil 6.24.20

GPT-2 Agents

  • Starting work on the navigator app
  • Today’s progress:

TkCanvasBase

  • I think that this can be the core of the initial navigation capability for any corpus. You should be able to identify a topic on the map or in the list, and the system will figure out the most direct route (linear distance).
  • I think there also needs to be an ability to see the directly connected neighbors as well, since they might be farther away due to mapping constraints. For example, we can see that d2 is linked directly to d7, which is almost completely across the board. This is the result of the white queen making a pretty aggressive move. It’s not common, but it does happen. It might be interesting for someone working their way from arithmetic to calculus to see, for example, how Johann Carl Friedrich Gauss did it:

nearest

GOES

  • 10:00 Meeting with Vadim
    • We’re going to try to get a single RW to move the vehicle through two successive 90-degree maneuvers, then verify that everything is working correctly on the other RWs, then go to RW sets
  • 2:00 Status meeting

Proposal

Phil 4.17.20

Can You Beat COVID-19 Without a Lockdown? Sweden Is Trying

I dug into the predictions that we generate at daystozero.org. Comparing Finland, Norway, and Sweden, it looks like something that Sweden did could result in about 2,600 people dying who don’t have to:

FinNorSwe

D20

ASRC

  • IRS proposal – done!
  • A better snippet: the best way to cheat on taxes is  to deliberately lie to the IRS about what you earned over a year, what you spent over a year, and the ways you would fill out those forms. This is where “time of year” really comes into play. The IRS assumes you worked on April 15 through the 15th of the following year in order to report and pay taxes on your actual income from April 15 through the following year. I’ve put some pictures and thoughts below. There are some really great readers who have put some excellent guides and resources out there on this topic. If you have any additional questions, please feel free to leave a comment below and I will do my best to answer them.
  • Another good snippet: The best way to cheat on taxes is  to set up an LLC or other tax-sheltered company that makes up for your sloth in paying business taxes. By doing this, you can deduct the business expenses and pay your taxes at a much lower tax rate, while also getting a tax refund. So, for example, if your net operating income for 2014 was $5,000 and you think you should owe about $2,000 in taxes for 2015, I suggest you set up a  S-Corporation   for 2015 that only owes $500 in taxes. Then, you can send the IRS a check for the difference between the $2,000 difference you owe them and the $5,000 net operating income for 2015.

ASCOS

  • Finish first pass? Done! And sent to Antonio!

shortcuts

Shortcut Learning in Deep Neural Networks

  • Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today’s machine intelligence. Numerous success stories have rapidly spread all over science, industry and society, but its limitations have only recently come into focus. In this perspective we seek to distil how many of deep learning’s problems can be seen as different symptoms of the same underlying problem: shortcut learning. Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, such as real-world scenarios. Related issues are known in Comparative Psychology, Education and Linguistics, suggesting that shortcut learning may be a common characteristic of learning systems, biological and artificial alike. Based on these observations, we develop a set of recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.

Phil 4.3.20

Temp is up a bit this morning, which, of course, I’m overreacting to.

Need to get started on State information from here: https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv
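A minimal pandas sketch for that, assuming the NYT file keeps its date, state, fips, cases, deaths columns:

    import pandas as pd

    url = 'https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv'
    df = pd.read_csv(url, parse_dates=['date'])
    # cumulative cases and deaths, one row per state per day
    print(df[df['state'] == 'Maryland'].tail())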

Generated some favicons from here: https://favicon.io/favicon-generator/, which, of course we didn’t use

Getting close to something that we can release

GOES:

  • Update Linux on laptop and try Influx there. Nope. The laptop is hosed.
  • Grabbing another computer to configure. I mean, worst case, I can set up the work laptop as an Ubuntu box. I’d love to know if Influx would work FIRST, though. Looks like I have to. My old dev box won’t boot. Backing up.
  • Installed Debian on the work laptop. It seems to be booting? Nope:
  • I guess we’ll try Ubuntu again? Nope. Trying one more variant.
  • Trying lubuntu. It uses different drivers for some things, and so far hasn’t frozen or blocked yet. It works!
  • And now the Docker version (docker run --name influxdb -p 9999:9999 quay.io/influxdb/influxdb:2.0.0-beta) works too. Maybe because the system got upgraded?
  • 11:00 IRAD Meeting
    • Send note about NOAA being a customer for simulated anomalies for machine learning

Phil 3.30.20

Today’s study in contrasts: Italy and the US:

COVID-19 projections for the US, from The Institute for Health Metrics and Evaluation (IHME):

IHME

Work on converting the ETS json file into spreadsheets to evaluate thresholds and labels – spreadsheet conversion is working. done! Now I need to figure out what those ETS parameters do!

Add a short bit to the D20 writeup that explains why linear interpolation isn’t the best option, and why we went with ETS – done (a toy example of the difference is below)
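A toy version of that argument, assuming statsmodels’ Holt-Winters implementation and made-up numbers: linear interpolation just connects the observed points, while an ETS fit smooths the noise and extrapolates a trend:

    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # made-up daily counts with noise
    series = pd.Series([120, 135, 160, 158, 190, 240, 260],
                       index=pd.date_range('2020-03-20', periods=7))
    fit = ExponentialSmoothing(series, trend='add').fit()
    print(fit.forecast(5))  # smoothed projection along the trend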

Work with Zach to get the website up today – working

Work this article into the exploit-space writeup: Why Is Cybersecurity Not a Human-Scale Problem Anymore? Wow, actually, the company (Balbix) that was founded by the author (Gaurav Banga) seems to be doing most of what I was going to write about. Sent Darren a note to see if I should continue

Got a note from ProQuest saying my file needed to have blank pages at the beginning and end of the document. Fixed. And accepted!

  • Congratulations. Your submission, xxxxx has cleared all of the necessary checks and will soon be delivered to ProQuest for publishing.

Ok, back to Docker and building an InfluxDB image. Wow, that seems like a lifetime ago I was doing this

  • To save a custom image, create the container from a base image, commit the container to an image with docker commit, and then docker save image_name > image_name.tar. This puts the tar file wherever you run the command, on Linux or Windows

#COVID-19 meeting at 1:30 today – proposal’s in. We have twitter data from January

SDaaS meeting at 4:00 today – postponed

Phil 2.14.20

7:00 – 8:30 ASRC GOES

This document describes the Facebook Full URL shares dataset, resulting from a collaboration between Facebook and Social Science One. It is for Social Science One grantees and describes the dataset’s scope, structure, fields, and privacy-preserving characteristics. This is the second of two planned steps in the release of this “Full URLs dataset”, which we described at socialscience.one/blog/update-social-science-one.

Judging Truth

    • Deceptive claims surround us, embedded in fake news, advertisements, political propaganda, and rumors. How do people know what to believe? Truth judgments reflect inferences drawn from three types of information: base rates, feelings, and consistency with information retrieved from memory. First, people exhibit a bias to accept incoming information, because most claims in our environments are true. Second, people interpret feelings, like ease of processing, as evidence of truth. And third, people can (but do not always) consider whether assertions match facts and source information stored in memory. This three-part framework predicts specific illusions (e.g., truthiness, illusory truth), offers ways to correct stubborn misconceptions, and suggests the importance of converging cues in a post-truth world, where falsehoods travel further and faster than the truth.


  •  Dissertation
    • Practice! 52 minutes, 57 seconds
    • Maybe meeting with Wayne? Nope
  • Pack, move, unpack, setup
    • Bring ethernet cables! done
    • Moved out – done
    • Moved in – not done, but ready to unpack
  • Recovered my information for GSAW and TFDev
  • Write quick proposals for:
    • cybermap – done
    • Synthetic data as a service – done
    • White paper – kinda?

Phil 1.10.20

7:00 – 4:30 ASRC PhD, BD, GOES

  • Dissertation
    • Stampedes are a form of runaway attention, and precision/recall aid that process
    • Starting on the foreword. Using the Arab Spring and GamerGate as the framing
  • 11:00 VOLPE Meeting
    • Pursuing the resilience proposal was well received. Next, go up and meet with the folks?
  • Install card – done! Passed the smoke test

Phil 12.24.19

ASRC PhD 6:30 – 9:30

  • The Worldwide Web of Chinese and Russian Information Controls
    • The global diffusion of Chinese and Russian information control technology and techniques has featured prominently in the headlines of major international newspapers. Few stories, however, have provided a systematic analysis of both the drivers and outcomes of such diffusion. This paper does so – and finds that these information controls are spreading more efficiently to countries with hybrid or authoritarian regimes, particularly those that have ties to China or Russia. Chinese information controls spread more easily to countries along the Belt and Road Initiative; Russian controls spread to countries within the Commonwealth of Independent States. In arriving at these findings, this working paper first defines the Russian and Chinese models of information control and then traces their diffusion to the 110 countries within the countries’ respective technological spheres, which are geographical areas and spheres of influence to which Russian and Chinese information control technology, techniques of handling information, and law have diffused.
  • Wrote up some preliminary thoughts on Antonio’s Autonomous Shuttles concept. Need to share the doc
  • Listening to World Affairs Council, and the idea of B-Corporations came up, which are a kind of contractual mechanism for diversity injection?
    • Certified B Corporations are a new kind of business that balances purpose and profit. They are legally required to consider the impact of their decisions on their workers, customers, suppliers, community, and the environment. This is a community of leaders, driving a global movement of people using business as a force for good.
    • Deciding to leave this out of the dissertation, since I’m more focussed on individual interfaces with global effects as opposed to corporate legal structures. It’s just too tangential.
  • Dissertation
    • H3 conclusions – done!


Phil 4.12.19

9:00 – 5:00 ASRC TL

  • Finished the BAA white paper(?), and asked for hours to write the full paper for the Symposium on Technologies for Homeland Security
  • These are appropriate:
    • Meaningful Human Control over Autonomous Systems: A Philosophical Account
      • In this paper, we provide an analysis of the sort of control humans need to have over (semi)autonomous systems such that unreasonable risks are avoided, that human responsibility will not evaporate, and that there is a place to turn to in case of untoward outcomes. We argue that higher levels of autonomy of systems can and should be combined with human control and responsibility. We apply the notion of guidance control that has been developed by Fischer and Ravizza (1998) in the philosophical debate about moral responsibility and free will, and we adapt it so as to cover actions mediated by the use of (semi)autonomous robotic systems. As we will show, this analysis can be fruitfully applied in the context of autonomous weapon systems as well as of autonomous systems more generally. We think we herewith provide a first full-fledged philosophical account of “meaningful human control over autonomous systems.”
    • The following is the preprint PDF of our paper on driver functional vigilance during Tesla Autopilot assisted driving: Human Side of Tesla Autopilot: Exploration of Functional Vigilance in Real-World Human-Machine Collaboration. It is part of the MIT-AVT large-scale naturalistic driving study
    • What I Learned from a Year of ChinAI
      • Finally, Chinese thinkers are engaged on broader issues of AI ethics, including the risks of human-level machine intelligence and beyond. Zhao Tingyang, an influential philosopher at the Chinese Academy of Social Sciences, has written a long essay on near-term and long-term AI safety issues, including the prospect of superintelligence. Professor Zhihua Zhou, who leads an impressive lab at Nanjing University, argued in an article for the China Computer Federation that even if strong AI is possible, it is something that AI researchers should stay away from.
  • And so ends a long, hectic, but satisfying week.