OpenAI Unofficial Self-Organizing Unconference Review

I had the distinct honor of working with a number of talented individuals at the unofficial unconference.

We were ambitious, maybe too much so, but I comfort myself by saying that incremental progress is progress nonetheless, and that it’s a worthwhile endeavor to report both our successes and our failures. The following is an account of what we tried and our reasons for trying.

We began with the problem phrased as follows: “Given a high-level input description of a program, produce source code which meets the requirements.” We didn’t expect to solve this problem in a day, but figured that generating a dataset would be both feasible in the time frame and worthwhile. Collectively, we decided to produce tuples of problem descriptions, programs, and example outputs. The next question was one of output language: we wanted to generate examples in a language which was simple enough for a machine to learn and also usable by a human creating the programming problems. Python was the favorite language among the group, but it had its limitations: it is a big language with lots of minutiae. Learning to generate Python would have required grappling with list comprehensions and more syntactic subtlety than we felt was strictly necessary. Instead, we opted for a high-level, Turing-complete computational-graph representation of a language (basically, an AST). The graph could be “compiled” to a description or “compiled” to Python and then run, giving all the required outputs of the problem.
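To make that concrete, here is a minimal sketch of the kind of representation I mean. The node types, method names, and English wordings below are invented for illustration; our actual graph language was richer, but the two “compilation” targets worked the same way.

```python
class Node:
    """A node in the computational graph; compiles to Python or to English."""
    def to_python(self) -> str:
        raise NotImplementedError
    def to_english(self) -> str:
        raise NotImplementedError

class Var(Node):
    def __init__(self, name):
        self.name = name
    def to_python(self):
        return self.name
    def to_english(self):
        return self.name

class BinOp(Node):
    WORDS = {"+": "plus", "-": "minus", "*": "times", ">": "is greater than"}
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def to_python(self):
        return f"({self.left.to_python()} {self.op} {self.right.to_python()})"
    def to_english(self):
        return f"{self.left.to_english()} {self.WORDS[self.op]} {self.right.to_english()}"

class IfElse(Node):
    def __init__(self, cond, then, other):
        self.cond, self.then, self.other = cond, then, other
    def to_python(self):
        return (f"({self.then.to_python()} if {self.cond.to_python()} "
                f"else {self.other.to_python()})")
    def to_english(self):
        return (f"{self.then.to_english()} if {self.cond.to_english()}, "
                f"otherwise {self.other.to_english()}")

expr = BinOp("*", Var("a"), Var("c"))
print(expr.to_python())   # (a * c)
print(expr.to_english())  # a times c
```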

The next issue was generating programming problems of sufficient variety to be useful. Too few examples would all but guarantee overfitting, so manually constructing programming examples was out. Too repetitive, and the model would ignore the details of the English and instead pick up on the structure of the sentences. That seemed okay at the time; we figured we could remove details without too much effort to make problems more ‘programming-challenge-esque’. It quickly became apparent that selecting which details to omit when framing a problem was almost as big a challenge as the original one.
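As a sketch of what generating variety looked like, a random walk over the node types sketched above could stamp out problems endlessly; the depth limit and branching probabilities here are placeholders, not the distribution we actually used.

```python
import random

# Builds on the Var / BinOp / IfElse classes from the previous sketch.
OPS = ["+", "-", "*"]
NAMES = ["a", "b", "c"]

def random_expr(depth=0, max_depth=3):
    """Sample a random expression graph, biasing toward leaves as depth grows."""
    if depth >= max_depth or random.random() < 0.3:
        return Var(random.choice(NAMES))
    if random.random() < 0.2:
        cond = BinOp(">", random_expr(depth + 1, max_depth),
                     random_expr(depth + 1, max_depth))
        return IfElse(cond, random_expr(depth + 1, max_depth),
                      random_expr(depth + 1, max_depth))
    return BinOp(random.choice(OPS),
                 random_expr(depth + 1, max_depth),
                 random_expr(depth + 1, max_depth))

graph = random_expr()
print(graph.to_english())  # the problem description
print(graph.to_python())   # the reference solution
```

Each sampled graph yields a matched description and program for free, which is also exactly what made the repetitiveness problem so visible: the phrasing varies only as much as the templates do.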

Our graph approach was producing problems like, “Given variables a, b, c, return a*c – b if c > a else c*b.” Not a particularly interesting problem, since it amounts to a direct translation from the description to code, and we wanted to avoid building “a compiler, but with ML.”
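For completeness, here is how a sampled graph becomes one of the (description, program, example outputs) tuples mentioned earlier. Again a sketch, reusing the classes above, with the input sampling made up for illustration; note that the description is just an unparsing of the program, which is why the task degenerates toward compilation.

```python
def make_example(graph, n_cases=3):
    """Build one dataset tuple: (description, program, example outputs)."""
    description = "Given variables a, b, c, return " + graph.to_english() + "."
    program = graph.to_python()
    cases = []
    for _ in range(n_cases):
        env = {name: random.randint(0, 9) for name in NAMES}
        cases.append((env, eval(program, {}, dict(env))))  # run the compiled Python
    return description, program, cases

print(make_example(IfElse(BinOp(">", Var("c"), Var("a")),
                          BinOp("-", BinOp("*", Var("a"), Var("c")), Var("b")),
                          BinOp("*", Var("c"), Var("b")))))
```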

The remainder of the day was split in two. The first part went to constructing more elaborate program descriptions and more subtle, interesting problems. The latter part went to an entirely different approach: using one autoencoder to learn the Python AST and another to learn the structure of English, then bridging the two with our limited dataset scraped from Project Euler and from OpenAI’s sample dataset.
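A minimal sketch of that bridging idea, assuming the English statements and the Python ASTs have each been serialized to fixed-size vectors; the framework (PyTorch here), dimensions, and layer sizes are all stand-ins rather than what we built on the day.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Encode inputs to a small latent vector and decode them back."""
    def __init__(self, dim_in, dim_latent):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                     nn.Linear(256, dim_latent))
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 256), nn.ReLU(),
                                     nn.Linear(256, dim_in))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

english_ae = AutoEncoder(dim_in=512, dim_latent=64)  # trained on English alone
ast_ae = AutoEncoder(dim_in=512, dim_latent=64)      # trained on Python ASTs alone

# Only this small map between the two latent spaces needs the scarce
# paired data (Project Euler problems, OpenAI's sample dataset).
bridge = nn.Linear(64, 64)

def describe_to_ast(english_vec: torch.Tensor) -> torch.Tensor:
    """English description vector -> decoded AST vector, via the bridge."""
    _, z_english = english_ae(english_vec)
    return ast_ae.decoder(bridge(z_english))
```

The appeal was that each autoencoder could train on plentiful unpaired data, leaving only the low-dimensional bridge to learn from the small paired set.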

I’m not yet sure what we’d do differently. Most of the time, it seems we made the right choices given the information available; the biggest oversight, to me, remains misjudging the quality of the English output from the graph. I have to wonder whether that could be improved through clever graph-optimization tricks. Perhaps that’s a project for another hackathon.

On a personal note, I was overwhelmingly impressed by the depth of skill shown by the people there. As much as I don’t enjoy being the least smart person in a room, that’s a good state for a programming collaboration. I look forward to working with everyone again after I’ve had a chance to study some more. Maybe a lot more.

Thank you to everyone who participated. It has been a pleasure.
