When I did my Computer Science degree, the vast majority of courses were 50% final, 30% midterm. Even programming exams were handwritten, proctored by TAs in class or in the gymnasium. Assignments/labs/projects were a small part of your grade, but if you didn't do them, the likelihood you'd pass the term exams was pretty darn low.<p>We already had AI-proof education.
I personally dislike placing a heavy emphasis on exams. Assignments/projects have been consistently the most enjoyable and rewarding parts of the courses I've taken so far in university.<p>It's a shame that they are also way more susceptible to cheating with AI.
I went to college as a MechE, so I'm unsure if compsci was different. But overall, all the “fun” projects were labs. We had three semesters of hell, each with 2-3 labs, and we wrote 20 pages or so for EACH lab every week (usually in a team of 2-3).
Also way more susceptible to cheating in traditional non-AI ways. And your mark ends up depending a lot on how much time you have to invest independent of how good you are at the course material.<p>Assignments and projects are great for learning, but suck for evaluation.
I really appreciated classes where there were rapidly diminishing returns to time spent :)<p>Another example: lit classes where the grade is based on time-limited, open-book exams, handwritten in "blue books".<p>Read the book, pay attention in class, spend 90 min writing an essay, and you're done.
is evaluation that important? ultimately if you can't do the work you're only cheating yourself in the long run...
Part of the purpose for evaluation is to provide feedback. I'm not going to claim that the form of feedback is great, but it does offer motivation to improve.<p>The other thing that feedback feeds into is credentials. I realize that some people are dismissive of this aspect of the degree, but it is important to pursue further studies or secure a job. While you can argue that these people are only cheating themselves, and some of them are cheating themselves, a great many will continue to cheat as they advance in academia or the workforce. In other words, they are cheating others out of opportunities.
That is the traditional view, the view of those who want to improve their own knowledge and abilities, and presumably the view of those who would like to consider the degree to be a meaningful credential.<p>However I suspect that there are many who 1) are more concerned about the short term outcome, 2) consider the degree/diploma to be little more than a meal ticket or arbitrary gatekeeping without any connection to learning, 3) view the work as a pointless barrier to being handed said diploma, and/or 4) don't see the value of human learning in a world where jobs are done by AI and AI systems routinely outperform humans on complex tasks.
Yes. I care that the work I've done and what I've learned is actually good and correct. Vibes-based learning/anything is valueless.
Then I suppose we can go back to having computer labs that can only access whitelisted domains and other study materials. Students code there to ensure no cheating.
The labs I was in weren't connected to the Internet at all, only a local intranet. Though they were all running pre-Oracle Solaris if memory serves, so I'm probably dating myself a bit.
Today, just having teachers walk around during an exam instead of browsing their phones would do wonders…
Writing programs by hand is something I had to do too. Complete waste of time.
Reading all these comments, I feel like US universities are a joke.<p>I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
No projects, no labs, no teamwork, no papers?<p>What a narrow set of skills to send into your economy.
Did you never have to write a research paper?
My school couldn't afford typewriters in the 1980's and early 1990's.<p>We wrote assignments by hand using a pencil or pen.<p>Is that really complicated?<p>When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
What's interesting is that, as I understand it, folks are using things like Google Docs for papers, and it's (apparently) straightforward to do analysis on a Google Doc to see, well, the life of the document: how it was typed in, how fast, what was pasted and cut back out.<p>My understanding is that a Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" the document being typed in and built to "see" how it was done.<p>I only mention this because, given the AIs, I'm sure even with a typewriter it's more efficient to have the AI do the work and then just "type it in" to the typewriter, which kind of invalidates the entire purpose of it in the first place.<p>The typing-in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.<p>And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)<p>TaaS -- Typing as a Service. Send us your Markdown file and receive a typed-up, double-spaced copy via express shipping the next day!
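The "event recording" idea can be sketched in a few lines. This is a toy model, not Google's actual change format; the event fields (`op`, `at`, `text`, `len`) are invented purely for illustration:

```python
# Toy sketch of a document stored as an edit-event log (NOT Google's
# real format): each event records an insertion or deletion, and the
# final text is reconstructed by replaying the events in order.
def replay(events):
    text = ""
    for ev in events:
        if ev["op"] == "insert":
            text = text[:ev["at"]] + ev["text"] + text[ev["at"]:]
        elif ev["op"] == "delete":
            text = text[:ev["at"]] + text[ev["at"] + ev["len"]:]
    return text

events = [
    {"op": "insert", "at": 0, "text": "Hello world"},
    {"op": "insert", "at": 5, "text": ","},           # "Hello, world"
    {"op": "delete", "at": 7, "len": 5},              # "Hello, "
    {"op": "insert", "at": 7, "text": "typewriter"},  # "Hello, typewriter"
]
print(replay(events))  # Hello, typewriter
```

Replaying a log like this is also what makes typing history auditable: an AI-drafted paper pasted in wholesale shows up as one giant insert event rather than hundreds of small keystroke-sized ones.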
Typing as a service is a whole cottage industry on Etsy.
Even Microsoft Word stores revision history inside .docx files, and that’s been used to expose plagiarism. I heard about one case where a student took an existing paper (I believe from a previous year/student) and pasted it into Word. They then edited it just enough to make it look different.<p>However, they didn’t remove the embedded revision history in the .docx file they submitted, so that went about as well as you can expect.
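As a sketch of why this works: a .docx file is just a ZIP archive, so embedded metadata like the original author is readable with standard tools. The archive built below and the `original_author`/revision values in it are fabricated for illustration; in a real file, tracked changes also live in `word/document.xml` as `<w:ins>`/`<w:del>` elements.

```python
import io
import zipfile

# docProps/core.xml holds document metadata in a real .docx; here we
# fabricate a minimal stand-in archive just to demonstrate inspection.
core_xml = (
    '<cp:coreProperties '
    'xmlns:cp="http://schemas.openxmlformats.org/package/2006/'
    'metadata/core-properties" '
    'xmlns:dc="http://purl.org/dc/elements/1.1/">'
    '<dc:creator>original_author</dc:creator>'
    '<cp:revision>42</cp:revision>'
    '</cp:coreProperties>'
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("docProps/core.xml", core_xml)

# "Forensics": reopen the archive and pull out the creator field.
with zipfile.ZipFile(buf) as z:
    data = z.read("docProps/core.xml").decode()
creator = data.split("<dc:creator>")[1].split("</dc:creator>")[0]
print(creator)  # original_author
```

If the student in the anecdote had simply copied the text into a fresh document, none of this history would have survived; submitting the original file is what sank them.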
Hmm, I have some old daisy-wheel printers in the closet that I've been meaning to strip down for stepper motors, maybe I should refurb them instead :-)
Arms race...<p>Oh look, there's an LLM trained on keylogger data to spew slop at your personally predicted error rate; bonus if it identifies over USB as a keyboard.
You should look up the history of the Loebner Prize [1]. There’s a shocking amount of technological development in some chatbots that went toward <i>simulating</i> mistakes and typing patterns to make them seem more human-like.<p>In some of the later Loebner competitions, when text was transmitted to the human character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.<p><a href="https://en.wikipedia.org/wiki/Loebner_Prize" rel="nofollow">https://en.wikipedia.org/wiki/Loebner_Prize</a>
Why are people promoting the idea that exams are not written or given in person anymore? I graduated relatively recently and maybe had 1 take home exam during my entire education. Every other exam was proctored in person and written. The professor who made the take home exam also made it much more difficult than a normal exam so I would not really say it was easier than a normal in person test.
Take-home exams were very common when I was in school, which was before you could get answers on the internet. After internet answer-sharing and cheating sites came along, a professor had to either not care and let cheating run rampant, or struggle to constantly invent unique new kinds of take-home questions. AI has basically killed that option too.
Did you by any chance graduate before the COVID-19 pandemic?
I loved take-home exams because they allowed me to study beforehand without the insane pressure and condensed studying required for exams in the classroom. Even though they were normally much harder and longer, I liked them. I felt I learned much more through them because I could take the time to understand concepts I had missed without feeling the time pressure of in-person exams.<p>It's a shame that humans find a way to cheat ourselves out of things that benefit us by over-"optimizing" the wrong things.
I used to make my classes 60-80% project work, 40-80% quizzes, all online.<p>I now do 50% project work, 50% in-person quizzes: pencil on paper, with a page of notes.<p>I'm increasingly moving to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.<p>Ironically, the traditional bureaucratic lag in universities might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-AI-prompting in the future.<p>We'll see.
I always preferred the "you get some grades along the way to gauge your progress but the lion's share of the weight went to the proctored exams" method unless the lion's share of the normal work was also proctored anyways (at which point it doesn't really matter how it's done).<p>The reason was less for myself and more because anything group related suddenly shot up in quality when the other individual work classmates were graded on couldn't be fudged.
So at 50%, someone who uses AI to get 100% of the homework grade will earn a D (sometimes passing) if they can get at least a 20% on your quizzes, and a C (always passing) if they get at least a 40%. Did you make your exam so difficult that students who truly didn't learn the material earn less than 20-40%? Because if it was, say, multiple choice questions with four possible answers, then you can expect them to earn at least 25% just by chance.
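For concreteness, the weighted arithmetic in that comment works out as follows (a throwaway sketch using the 50/50 split from the parent comment):

```python
# Weighted final grade: 50% homework, 50% in-person quizzes.
def final_grade(homework, quiz, hw_weight=0.5):
    return hw_weight * homework + (1 - hw_weight) * quiz

# A student who outsources all homework to AI (100%) still reaches
# typical passing thresholds with modest quiz scores:
print(final_grade(100, 20))  # 60.0 -> a D at many schools
print(final_grade(100, 40))  # 70.0 -> a C
```

At a 25% guessing floor on four-option multiple choice, that same student lands at 62.5, which is why the comment argues the quiz weight or difficulty has to do real work.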
The last point is very interesting and might keep universities relevant.
A hand-written essay in class would seem to be a workable mechanism for a student to demonstrate an ability to reason on their own about a subject.<p>One of my best college professors would review such essays in-person, one-on-one twice each semester.
When I was in college, your grade fully depended on the oral exam/debate with the professor. Everything else was but the entry ticket.<p>Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types
I think if your university doesn't do in person exams with pen and paper then the degrees it hands out are not much evidence of anything.<p>If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.<p>I very fondly recall many of the course I did at university. The exams were a helpful motivating factor even for the interesting courses.
Better dust off that old AlphaSmart!
If students cheat they hurt only themselves. Make sure they understand the consequences for cheating (missing out on learning) and that's about all you can do.
Depends on your measuring stick. Cheating themselves out of an education? Yep. Cheating themselves into a credential -> job - the status / remuneration of which is almost entirely divorced from the quality of the education, being aligned rather with the name of the organization on the diploma.<p>Former (second-generation) college professor, here. I find it almost impossible to be cynical enough about the US education industry.
Well from a certain perspective they are also hurting the schools reputation, the programs reputation, and ultimately their fellow students.
> If students cheat they hurt only themselves<p>This statement is more defensible after removing “only”. If it “only” hurt the cheaters, there would be no need to police cheating at all.
The thing is, when colleges don't test students' ability properly <i>before</i> issuing a credential, employers start testing job applicants' ability <i>after</i> they've received it.<p>And they'll do it with all the 'unnecessarily high stakes' and 'risk of unconscious bias' and 'not truly representative' problems that written exams have; and a bunch of extra problems too.
This is untrue. Students who graduate without actually absorbing knowledge as laid out in the curriculum devalue the degree when they show up in the workforce lacking that knowledge. This is part of why new grads are undesirable job candidates, there’s a chance you are paying a higher wage for someone who may not have learned anything.
They hurt other students who worked hard for the degree. They hurt the reputation of the school and the utility of the degree as a credential.
When I attended university (almost a decade ago, I guess; time flies) we didn't have a single exam on the computer. All exams were on paper or oral, and most were without notes too. Computer science does not require computers.
This is usually true, but it is also true that some classes are graded "on a curve," so grade inflation can hurt people who are honestly doing the work. Also, cheaters tend to suck all the air out of a room. For example, my I.T. instructor designed a really nice oral-quiz slideshow for the entire classroom. I found it a few hours before class, watched it in its entirety, and then when he tried to run it live, I spoiled all the answers before any other student could respond. I wasn't strictly cheating, but I wasn't being fair to my classmates' learning process, either.
I had a typewriter growing up and I remember thinking it was the coolest thing. I was amazed by it and tried writing several stories. Eventually my dad bought me a crappy old computer that was only really good for writing, and that was cool too. I loved that thing. It was small, with an integrated monitor and keyboard, so it didn't take over the whole desk where I still used pencil and paper often.<p>Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there.
I like open note exams (and perhaps open book exams, as you need to know the book well to know which page to look at) - it forces you to condense the material to the salient points and operationalise it to solve what would be more challenging problems than a simple recall exam.<p>When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
If AI can do the work, maybe the test should be more focused on what AI can’t do? This is like anyone still doing a traditional coding interview with leetcode problems just because they haven’t yet done the work to figure out what to test for in a world where Claude Code exists.
The goal of the educational process isn't the test paper, it's the learning.<p>Gyms aren't redundant because tractors exist.
Gyms are a great example actually because tractors exist to do the economically useful work. You now optionally go to the gym to benefit from fake labor that used to be the side effect of useful work. The fake labor is now what colleges are trying to sell, and it's going to kill them.
Gyms predate tractors.
3,000 years ago, physical labor was a component of most jobs. Today gyms are for people who can afford to attend them and don't have a day job that naturally exercises them through labor. People exercising purely for health benefits, and not because the strength benefits them in their job and in other facets of their life, is new.
Huh? The gym analogy doesn’t even make sense. People didn’t go to gyms when they were farming with oxen. Gyms are popular now precisely because tractors exist and you don’t need manual labor to farm anymore but people still need the physical exercise for their health. Society has adapted to the arrival of new life-changing technology. Our education system needs to adapt to new technology like AI too. You can probably uplevel a lot of courses and cover a lot more interesting topics than before and teach real application of things you learned aided by AI. Just like when I was doing a CS major 20 years ago, they didn’t spend too much time teaching me assembly programming beyond 1 or 2 lectures (they let me use a compiler for programming assignments!).
There are plenty of things AI can do that students still benefit from learning.
This is like saying you shouldn't learn to add because we have calculators.
Maybe instead of trying to teach around the abacus, we need to teach the higher level things you can reach with MATLAB.<p>We're doing these students a major disservice making them live in the old world. It's our fault for being inflexible, but their world is going to be wholly different and we should just embrace that.
One consequence of LLM fraud at scale making remote/online tests and document submission worthless is that it might act as a giant revitalizing boost for brick-and-mortar school systems. Suddenly having real teachers and students in a room together has value again, for credibility and authenticity alone.<p>LLMs are also making a public repo code portfolio much less meaningful as a sign of legitimacy.
This will only work until somebody figures out how to connect an AI to the typewriter, which will have some sort of mic, and the person will start dictating into it with AI-assisted revisions. Once the dictation is over, the AI-enabled typewriter will be instructed to type the work out.<p>Testing and instruction should be modified to account for AI. If a student uses an agentic AI for work, learning, or research, then when test time comes, the student should be required to stand at the front of the class and teach what they have learned, i.e. "teach back" all they learned to the entire class and the teacher. The whole class, instructor included, should also participate in a Q&A session to make sure that student's learning is not just memorization, e.g. restate the information learned using different words, different scenarios, etc.
I’m confused about too many things being measured at once. Is Phelps banning AI to ensure her students are fit to pass a terminal examination? And doing so to ensure that her class has a good pass rate, proving she is a good teacher and can keep her job? What if her cohort is particularly dumb? Is she incentivized to make it <i>easy</i> to pass her classes to get that A you paid so much for? Or <i>hard</i>, to make that A worth something?<p>My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question. If your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not improved over anything, you have failed.
My impression was she just brings the typewriters into class as a one-day novelty thing per course, not that it becomes the norm for the whole semester. The goal is to give the students a taste of what the old-fashioned way is like, to get them thinking about it.
... meanwhile, all these students graduate, can't find jobs and become plumbers or bricklayers.
Just have them write it out. “Ain’t nobody got a goddamn typewriter”.
Pfft, just grab a teletype and run lpr -P ttyUSB0 ai_generated_report.txt ;-)
Next up: allow slide rules on exams.
The college instructor might as well ban calculators and use abacuses then.
Might be an unpopular opinion in this thread, but college was made worthless for most degrees as soon as the internet got popular and silly performative shit like this is the death knell. College is about learning how to work in an industry. I'd predict an uptick in trade schools and other hands-on work like medicine, and a continuing downturn in so-called formal education for anything white-collar, programming included. Students are customers. Businesses are going to use AI going forward. No reason to waste time on this.