A high point of my career

In the previous story, titled Some jobs will take everything you give them, I described what I called “undoubtedly the lowest point of my career.” So I decided, for the sake of balance I suppose, to share a story about one of the high points. If it comes across as though I’m boasting or blowing my own horn, please forgive me; that’s not my intention. Naturally, one risks putting one’s own work in too positive a light with a story about a career high point.

As a graduate student, I worked on topics in quantum computing that included quantum cellular automata, quantum finite automata, and space-bounded quantum computation, which were very much off the beaten path at that time. I was definitely interested in those things (and still am), but a big part of choosing to work on them was that few others had, and that gave a student like me a fighting chance to contribute and discover something new. I was able to publish papers on this work, and would get invited to speak at quantum computing workshops from time to time, but I wanted to do something somehow more relevant and of broader interest. Then, when I graduated and began working as a postdoc at the Université de Montréal in 1998, I started thinking about quantum interactive proof systems. This turned out to be a topic that would give me a lot to think about for the two decades that followed.

In theoretical computer science, interactive proof systems are an abstract computational model involving an interaction between a hypothetical prover and a verifier. The prover and verifier have different goals: the prover’s goal is to convince the verifier that some given statement is true (whether or not it actually is true), while the verifier’s goal is to check the validity of the prover’s argument — and to not be fooled into believing the statement is true if it happens to be false. You could say that the practical importance of this model is open to debate, but there is no questioning its importance in theoretical terms. It has played a truly salient role in the development of complexity theory, and it’s also important in theoretical cryptography. To be clear, these are claims about the classical version of this model, whereas I was working on the quantum version of it, which previously hadn’t really been studied at all.
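To make the prover/verifier idea concrete, here is a toy sketch of my own (an illustration for this essay, not anything from the papers discussed here) of the classic classical interactive proof for graph non-isomorphism. The verifier flips a secret coin, sends a randomly relabeled copy of one of two graphs, and demands that an all-powerful prover identify which graph it came from:

```python
import random
from itertools import permutations

def relabel(graph, perm):
    # Apply a vertex relabeling to a graph given as a frozenset of edges.
    return frozenset(frozenset(perm[v] for v in edge) for edge in graph)

def is_isomorphic(g0, g1, n):
    # Brute-force isomorphism test; fine for the tiny graphs used here.
    return any(relabel(g0, p) == g1 for p in permutations(range(n)))

def run_protocol(g0, g1, n, rounds=20):
    # The verifier checks the prover's claim that g0 and g1 are NOT
    # isomorphic. Each round: flip a secret coin b, send a randomly
    # relabeled copy of graph b, and demand that the prover name b.
    for _ in range(rounds):
        b = random.randrange(2)
        perm = list(range(n))
        random.shuffle(perm)
        challenge = relabel((g0, g1)[b], perm)
        # An honest, all-powerful prover checks which graph the
        # challenge is isomorphic to. If g0 and g1 really are
        # non-isomorphic, this identifies b with certainty; if they
        # are isomorphic, the prover can do no better than guess.
        answer = 0 if is_isomorphic(g0, challenge, n) else 1
        if answer != b:
            return False  # verifier catches a wrong answer and rejects
    return True  # every round passed; verifier accepts the claim

# A 4-vertex path vs. a triangle plus an isolated vertex: same number
# of edges, but not isomorphic (different degree sequences).
path = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})})
tri = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})

print(run_protocol(path, tri, 4))  # prints True: the true claim is accepted
```

If the two graphs were actually isomorphic, the prover’s answer would match the verifier’s coin only half the time per round, so twenty rounds expose a false claim with overwhelming probability; that is the sense in which the verifier cannot be fooled.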

I discovered something interesting: ordinary (meaning classical) interactive proof systems could be parallelized by making the model quantum, in the sense that long conversations between a classical prover and verifier could be crunched down to just three quantum messages by a quantum interactive proof system. There is no known way to do this classically unless something unexpected happens in complexity theory: the so-called polynomial-time hierarchy collapses. This is all to say that this was a bona fide interesting result that illustrated a new way to take advantage of quantum information. For the first time, I felt like my work had relevance. This was in the fall of 1998.

In January 1999, I travelled to DePaul University for the AQIP ’99 (Algorithms in Quantum Information Processing) workshop. The “A” for algorithms was later shed, and the workshop became the premier conference on the theory of quantum computing known as QIP — which is coincidentally the name of the complexity class associated with quantum interactive proof systems that I now had something to say about. I was not invited to speak at the workshop, and did not plan to speak, but I did bring some printed copies of the new paper about quantum interactive proof systems that I’d recently finished. I shared it with several people, and the organizers kindly squeezed in an extra talk slot for me to present it toward the end of the workshop. This would not likely happen now because QIP has become much more formal and competitive, but back then there wasn’t a program committee and talks were by invitation, so it wasn’t so unusual.

One of the people I shared my paper with was Alexei Kitaev — a name that anyone who works in quantum computing recognizes immediately. He was already a legend, even back then, and this was the first time I’d met him. He was giving a talk on the QMA-completeness of the local Hamiltonian problem at the workshop, which is now a cornerstone result in quantum complexity theory. I was really excited to hear about this result, which wasn’t directly about quantum interactive proof systems but was closely related, so I was pretty sure he would be interested in my new work. I gave him a copy of my paper and we talked for a bit, in a short conversation that a few others joined and left in a typical coffee break sort of fashion.

The next morning brought another coffee break, and I was milling around trying not to look awkward when Kitaev came up to me appearing kind of tired but excited. He’d been looking for me! He explained that he was very tired because he’d stayed up all night studying my paper, and he said that it was a beautiful result. I’m not exactly sure how much of it sank in at that moment and how much of it I came to appreciate later — but when I look back now it seems like it came out of a dream. One of the greatest minds of our time decided to stay up all night studying my work, and found beauty therein.

Now, there really isn’t much point in reading that old paper any longer. Shortly after the workshop, Kitaev invited me to come to Caltech to work with him for a couple of weeks, and in that short time we were able to prove a great deal more about quantum interactive proof systems than I had on my own, completely subsuming my original result in the process. (No problem at all, I still published the paper, as just about any academic researcher would have.) It turned out that not only could classical interactive proof systems be parallelized through quantum information, but so could quantum ones, which is a more general result. And that was just one of several new things we proved. Among the other things was an upper bound on the computational power of quantum interactive proof systems that, to my knowledge, was one of the very first times semidefinite programming was introduced into quantum computing.

I was both awed and humbled by the power of Kitaev’s brain, but not in a negative way. By witnessing greatness, those of us who are more ordinary are offered a glimpse of that to which we may aspire.

Some jobs will take everything you give them

From January to December of 2021, I served as the Interim Executive Director of the Institute for Quantum Computing at the University of Waterloo. Interim means temporary: it was a one-year appointment. But all along there was an expectation that I would move into the role on a “continuing” basis, meaning a five-year term without the word interim to let people know not to take me too seriously. Initially I thought this was likely, and others at the university thought the same.

The institute had been searching for a director for several years. At the beginning of the search, there had been great expectations and lofty visions of the hero and savior that would take on the job, but the search had failed repeatedly. When the pandemic hit and lockdowns followed, the need to appoint someone internally came sharply into focus. We played a game of tag for a little while and I became it. There was a part of me that wanted to do it and part of me that didn’t, but one thing is for sure: I had absolutely no idea what the job actually entailed.

Once I started, I was quickly overwhelmed. All of a sudden I had 50 staff members under me, compared with zero at all points prior, and an annual budget with twice as many figures as I’d ever had to worry about. Believe it or not, this part actually wasn’t so bad because the senior staffers at the institute (who are the true heroes in this story) mostly took care of these things. But there were a whole lot of other things on my plate, and I really didn’t know what I was doing. I had no training to draw on; I was trained to prove theorems, but there were no theorems involved. Mostly it was about money, building relationships with government and industry to get it, and fighting with others at the university about it. My days were packed with meetings, and in the beginning my goal was merely to get through each one without looking like a complete idiot. There were definitely a few for which I did not succeed.

But the real problems started once I gained my footing, because of how hard I leaned into the job. In fact, I gave it pretty much everything I had. I wanted to succeed, and that meant the institute succeeding — and it wasn’t long before there wasn’t really any difference between IQC and my life.

By July, things had become completely insane. There was so much I needed to get done that there was no hope. I hadn’t even thought about research in 6 months. Then the shit really hit the fan, and I thought I’d lost a $25 million line of funding that the institute depended upon. I would be the first director in nearly 20 years to fail to secure that support. As a result, I crashed and burned. I will spare you the details of what that looked like, except to observe the tragic irony of having executive signing authority over a nine-figure trust and simultaneously being physically incapable of forcing myself to stop crying. This was undoubtedly the lowest point of my career.

People that care about me helped me, for which I am both fortunate and grateful. When my assistant learned what was going on, she wiped my calendar clean so I could take a week off to recover. The weird thing is that I got the impression that, as the assistant to a couple of directors before me, as well as several department chairs prior to that, she had seen this before. At some point I realized that, with the pandemic and everything else, I’d neglected to take any vacation in about 18 months, and that must have been a contributing factor. So keep this in mind: Sometimes you have to take a break. And if you don’t, you’ll have to take a break.

I did eventually recover (although it took longer than a week), and in the process I came to the realization that taking this job on a continuing basis was simply not something I could do to myself. I felt terrible about letting the institute down, just when it was finally about to get a new director, and it was very difficult to tell my colleagues that I would not be taking the job on a continuing basis — but to say that a weight had been lifted off my shoulders would be an understatement.

There were more challenges and struggles to come in the second half of the year, and I wouldn’t say that I necessarily finished strong, but I completed the term I’d agreed to serve. A much more level-headed person than I stepped up, and I was able to hand the institute off to the next director in what I believe was a better state than when I had started. By the end, I’d lost about 15 pounds, and half of one of my eyebrows had fallen out. (So in case you noticed this in one of my earlier videos on the Qiskit YouTube channel, that’s what happened to it.)

My communications director and guardian angel throughout this experience had warned me about this sort of thing right from the start: “This job will take everything you give it.” That was wise advice and I should have listened.

The problem with papers

Researchers write a lot of papers. In 2023, for instance, 8,616 papers were posted to the Quantum Physics (quant-ph) section of arXiv.org alone, plus an additional 3,310 cross-listings from other categories. Among all those papers were some really good ones to be sure — but I tend to think we’d be better off without most of them.

The reason why researchers write so many papers is clear: there are strong incentives. Researchers are rewarded for writing lots of papers and punished for not writing enough of them. If you want to ensure that your tenure application is denied, don’t write any papers! There’s also the issue of assigning value to papers (which often means noting where they were published and nothing more), but when decisions get made and resources get allocated, there’s no denying that you’re better off with a longer CV.

The fact that modern research is so highly specialized is a contributing factor. Most research papers aren’t understandable to most people, even to other researchers working within the same discipline. As a computer scientist, I’d be hard-pressed to understand all but the tiniest sliver of computer science papers. Likewise, most computer scientists wouldn’t understand the papers I wrote. Yet, we’re forced to judge one another as researchers. I served in roles that required me to do this — assigning numerical scores to my colleagues’ research performances over some period, for instance — and it doesn’t take very many people on a list to make it completely infeasible to dig in and really understand what they’ve done. As a result, publication venues become like a form of currency and you count ’em up.

There are obvious shortcomings to this focus on quantity. For one, there’s an expectation for researchers to serve as editors, program committee members, and reviewers, which means time and energy invested into papers others write. The more papers there are, the less attention each one gets, resulting in a lower quality review system. A focus on quantity also clearly incentivizes lower quality papers, where each one tends to receive the minimum amount of work and the minimum number of ideas needed to get it published, and then it’s on to the next paper. Sometimes a single idea or technique gets spread out over multiple papers — not unlike a TV show spreading its plot over too many episodes. I’m not saying that everyone operates this way, but this is often how the game is played. And of course, this all fuels a publication industry that exploits researchers and then charges their institutions for access to papers, adding essentially zero value in return. Don’t even get me started on this.

Having left academia, I no longer face any pressure to publish papers — and all I can say is that I’m quite happy to no longer be involved in this system. So no, I won’t review any more papers — but rest assured that I won’t be submitting any for publication either. Given that I reviewed probably ten times as many papers as I ever wrote, I hope we can call it even.

I don’t have a solution to the problem, I just think it would be great if there were an effective way to incentivize quality over quantity — to make people more likely to write papers with lasting value that are truly worth studying. I mean, I’m sure very few people know how many papers Claude Shannon wrote, and it really doesn’t matter at all. What matters far more is that with one paper he changed the world. Of course, most of us won’t reach that pinnacle of excellence — but shouldn’t that be what we strive for?

Good advisors and bad advisors

When I was a graduate student my advisor was Eric Bach, and he was a great advisor. He didn’t work on quantum computing — very few people did back then — but I first learned about quantum computing through a reading group on it that Eric set up. He encouraged me to work on what I found to be interesting and gave me the freedom to do that. And though he didn’t work on quantum computing specifically, he knew a hell of a lot about mathematics and computation, and I always knew more when I left his office than when I’d entered.

Some graduate students are not so lucky. In various roles I held as a professor, including being a graduate program director and the director of an institute, I sometimes interacted with students in distress over their advisor’s treatment of them. Often they would cry. While I believe most professors I worked with treated their students kindly, some evidently did not.

I don’t know why some professors treat their students the way they do. I suspect in some cases they’re just trying to replicate the experience they had as a student, but they may not be able to do that. Or maybe the problem is that they are able. Some professors drive hard and expect that from their students in return, and forget that their students aren’t them. And some professors are just assholes.

In situations like this the power dynamic is completely imbalanced and students feel powerless. Switching to another advisor can be very difficult for multiple reasons, and there’s always the fear that the student isn’t going to be able to go anywhere without their advisor’s letter of support. The perception is that this person that’s making them miserable can end their career and everything they’ve worked for.

So what should you do if, as a student, you find yourself in this situation? I wish I had a good answer. My advice is to start by telling someone, like the program director for whatever degree program you’re enrolled in, or maybe an associate chair or dean in charge of graduate studies. Nobody can help you if they don’t know. Ask around — every program and department is different. Your department or school may have a graduate advocate or an ombudsperson with whom you can discuss the matter confidentially and who can provide you with advice. Whatever happens, you don’t need to put up with it — there are many paths your career can take and your advisor doesn’t hold the keys to all of them.

And if you’re deciding whom to work with as a graduate student, be sure to talk to people before making a decision. The right answer is not necessarily to go with the advisor or school with the bigger name. Talk to your would-be advisor and ask questions, and talk to their students as well. The importance of having an advisor that supports and encourages you, and has your best interests in mind, should not be underestimated.

A fatal-headshot scooping

In science, getting “scooped” generally means that someone else announces a discovery, maybe by posting or publishing a paper, that steals the thunder from something you’re working on. Maybe they reached the same conclusions as you but got there quicker — or maybe they discovered something more interesting that trivializes your work. Sometimes it’s not so bad and you can still get something out of it, like putting a paper out quickly and claiming an independent discovery or publishing an alternative way to reach the same conclusion. Other times there’s nothing to be salvaged — so you ditch everything and move on to something else. This is a story about a time I was scooped as a graduate student that falls into the second category.

I’d been working on something for a while — several months at least. It had to do with space-bounded quantum computation but the details aren’t really important for the sake of the story. It was very technical, and I’d built up a lot of mathematical machinery to make it all work. I was nearly done with a paper about it, and I’d used the results I’d discovered as the basis for a thesis proposal that I had to submit and defend as a part of my PhD degree requirements. That part went fine and my thesis proposal was accepted.

A professor I worked with knew Peter Shor and had arranged for me to visit him for a couple of days. Peter was at AT&T Research at the time, before he moved to MIT. I prepared a talk on what I’d been working on and travelled to New Jersey. I was excited to share my results and of course I also hoped to make a good impression.

About an hour or two before my talk, one of the postdocs working with Peter showed me a new paper he’d just learned about: Reversible space equals deterministic space by Lange, McKenzie, and Tapp. The title alone struck fear into my heart. I quickly read the paper, which is both simple and beautiful, and it probably took me about 90 seconds to understand how it worked. And I saw that it was a fatal headshot to my work. This wasn’t a case where I could claim the independent discovery of something or salvage an alternative proof — everything I’d done was essentially trivialized by this work and no longer worth the paper I’d printed it on. It was pretty devastating to be honest. As a graduate student, publishable results were few and far between and each felt precious. And this one hurt all the more because it was going to be the basis of my thesis.

So that was bad — but then I had to give the talk. The reality was that what I’d come prepared to speak about was now trivial and pointless and I was basically just wasting everyone’s time. But I couldn’t give the talk I’d prepared and not tell them what I’d recently learned, so I explained the situation and went through what I’d prepared. The audience was mercifully small but it did include Peter. Eric Rains was there as well — Eric moved on to other subjects but his work on quantum information theory from this period is well known and had a significant impact on the field. Anyway, it was a hard talk to give and it surely did not impress, but at least it ended.

I have a memory of flying home feeling defeated and shell shocked, but after I returned home I just went back to work and started something new. I never once touched that nearly completed paper again, and now decades later I no longer seem to have it in my files. And everything turned out fine. I found new results and wrote a different thesis than what I’d proposed — and if anyone on my committee realized, they didn’t mention it. Of course, it also didn’t matter what people thought of me; that wasn’t the point of the visit.

I hope there’s some encouragement to be found in this story. Looking back now, I wouldn’t choose not to have had this experience — I learned from it. We tend to hide our failures and struggles as we focus on our successes and accomplishments, but we’re all subject to the luck of the draw and things don’t always go as we might have hoped. So if you face a setback you’re not alone. To my eye the only thing to be done is to put one foot in front of the other and move forward.

An early fail as an educator

In my first semester as a graduate student I taught introductory programming to around 25 undergraduates. This was just one of many, many sections of the same course. The hundreds of students enrolled were divvied up into small sections, and those sections were assigned to graduate student teaching assistants, many of whom (like me) had little or no teaching experience. I guess the reasoning was that if one section didn’t go well, only a small fraction of students would be affected. There was a weekly session for first-time teaching assistants, to provide guidance, discuss the curriculum, and so on — but it obviously wasn’t enough for me.

I performed abysmally. I didn’t take it seriously and didn’t particularly enjoy it — I wanted to be doing research instead. I remember uttering “I hate teaching” in class. Among other unforgivable transgressions, I provided an input set to a coding assignment that had a bug — I’d not even bothered to test it. How much frustration did that cause, I wonder?

I paid the price on the instructor evaluation at the end of the course, scoring at the very bottom of my entire department. I knew this because those scores were made public, including the names of the instructors. Everyone in my department would have looked at the bottom of the list and seen my name, kind of like looking at a car accident while driving by and being glad that it wasn’t them. “John is the worst TA I’ve ever had,” one student wrote. In retrospect I don’t know why, but those evaluations were a huge shock at the time. Somehow I hadn’t realized that I was so bad.

So I begged for another shot, and was offered an 8am class the next semester to make amends. I believe I did, at least to a new batch of students. I took it seriously, worked hard, and kept the perspective of the students in mind and treated them with respect. I doubled my evaluation scores, and I also found that I actually enjoyed teaching.

I think of this experience as being formative. When I was a professor I put a lot of work into being a good teacher, and have no doubt that this experience was a part of what motivated that. I always tried to put myself in the shoes of my students and never asked them to do something I hadn’t done myself to completion. I didn’t hide the day before the final exam; I held “all day” office hours from 9 to 5 instead. I memorized the name of every student in my classes and spent the hours I proctored exams going from one student to the next in turn, reciting their names in my head. I believe I managed to straighten myself out, and even received an award one year for outstanding teaching — one of the very few professional honors I’ve received and undoubtedly the one that’s most meaningful to me.

But I’ll never get that first class back. And to those that took that class with me, all I can say is that I am truly sorry.

Choosing what kind of scientist to be

Scientists often identify themselves with a particular discipline, irrespective of what they might actually be working on at that moment. In my case, although my work connects with physics and mathematics, I describe myself as a computer scientist. I’m definitely not a physicist and I don’t know enough about mathematics to be a mathematician (though I do like to pretend sometimes), but the real reason I consider myself to be a computer scientist is that computer science is what I studied as a student.

I don’t recall making a decision to become a computer scientist; I just gravitated to what I thought was cool. When I was a kid I wanted to be surrounded by computers, like in Batman’s Bat Cave. I didn’t have any idea what those computers would actually do. My first experience with an actual computer was in school, around grade 6 or 7, programming a TRS-80 in BASIC. In high school I took computer classes and found that I was good at it, and eventually I had a couple of computers of my own to surround myself with. And there were computer games, which put hooks in me and reeled me in like nothing else. I have fond and vivid memories of experiences that were programmed for me by someone else.

I knew that I wanted to study computer science as an undergraduate before I started. I didn’t really know all that much about computer science, but it didn’t matter — I loved computers and never seriously considered a different subject.

As an undergraduate I took many computer science courses, but one of them really grabbed me and took my interest to a new level. It was a course on theoretical computer science: automata theory, grammars, Turing machines, and so on. Having taught such a course myself many times since then, I’m well-acquainted with the lack of interest many have in this subject, but to me this was the coolest stuff I’d ever seen in my life. The course was taught by Professor Ker-I Ko, who I thought was probably the coolest person I’d ever met in my life — like a wizard possessing arcane knowledge. One of the research results Ker-I Ko is well-known for is that, for any chosen level of the polynomial-time hierarchy, there exists an oracle relative to which the hierarchy collapses exactly to that level. I’m sorry, but it just doesn’t get cooler than that.

Sometime after the course finished, Ker-I Ko agreed to supervise me for a summer of independent study, and then allowed me to take a couple of graduate courses with him during my final year as an undergraduate — one on computability theory and one on complexity theory. I loved it all and wanted to learn more. Before this I took my studies seriously enough, but only worked as hard as I needed to get good grades. Now I was hanging out in the stacks reading everything about theoretical computer science I could.

As a graduate student I initially worked on computational number theory, which I found to be fascinating but wasn’t particularly good at. Then I discovered quantum computing. It was 1994, and Peter Shor had just discovered his now-famous quantum algorithms for factoring and computing discrete logarithms. I read Shor’s paper, along with every other paper on quantum computing I could find, and was totally hooked — quantum computing took coolness to an entirely new level. So I completely dropped what I was working on, started thinking about quantum computing, and haven’t stopped.

Sometimes students ask me what field they should enter, what classes they should take, what topic they should study as a graduate student, and so on. My response is to ask them what they find to be most cool. For me there was never really a choice to make — it was clear and had nothing to do with career prospects, getting a job, or anything at all beyond the subject itself. I couldn’t imagine working in a field that I didn’t find to be truly fascinating. Of course, what’s fascinating varies from person to person, so I tend to think that the only effective way to navigate is to follow your heart and do what you love.

And as educators we should not forget the importance of teaching others about the things that captured our hearts. Not everyone will be interested, but some will see what we saw — and perhaps their paths forward will be illuminated.