AI tools such as ChatGPT are efficient, accessible resources for education and development, especially for software engineers, as long as people don’t overdepend on them and instead turn to them for assistance. In my experience, learning with AI is best done via conversation: the model explains a topic, you respond with questions, the model addresses them, and so on. Not a big fan of that copy-and-paste mumbo jumbo.
Throughout the semester, I’ve mostly used ChatGPT to help me understand new concepts and walk me through installations, but toward the end, I began to make better use of GitHub Copilot. ChatGPT, which I use for practice in almost every class I take, is my favorite.
Now, how exactly did I incorporate AI into my work in this class?
I usually opted to avoid using AI during the experience WODs because I felt it was important to challenge myself by solving them on my own first, even if I didn’t finish in time. Plus, most of the experiences provided video solutions made by the instructors, so if I ever needed help, I often just referred to those. Sometimes, however, I turned to ChatGPT if I was completely stumped and needed someone to point out what I was doing wrong. Usually, it was stuff like, “I know I have to use this function, but I don’t know why it isn’t working as intended,” along with my code.
I tried not to use AI too much during the practice WODs because I wanted to prepare myself for the real deal, but there were quite a few times when I found myself absolutely stuck. When this happened, I gave in and turned to ChatGPT so I could move forward, especially if I was working on a more complicated WOD and there was no way for me to progress otherwise. For instance, something as simple as forgetting to put ‘use client’ at the top of a component file has messed me up before, and ChatGPT pointed it out (see the sketch below). I’d go, “What’s wrong with my code? Why isn’t it working?” and it would point out the little error. In those moments, it just made more sense to get unstuck and focus on practicing the main content.
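For anyone unfamiliar, here’s a minimal sketch of the kind of component I mean, assuming a Next.js App Router setup like the ones we used; the component and its state are made up for illustration:

```tsx
// Hypothetical component for illustration. Hooks like useState only run in
// client components, so the 'use client' directive must be the very first line.
'use client';

import { useState } from 'react';

export default function LikeButton() {
  // Without the directive above, Next.js treats this as a server component,
  // and this useState call throws an error instead of working.
  const [likes, setLikes] = useState(0);

  return (
    <button type="button" onClick={() => setLikes(likes + 1)}>
      Likes: {likes}
    </button>
  );
}
```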
Likewise, I followed the same approach for the in-class WODs. There have been a few times when I forgot a single line that totally prevented my code from working and couldn’t figure out the problem on my own, so I went to ChatGPT for help by pasting my code and asking it why I was getting errors. I mean, I always tried to solve it first, but after a certain point, I was just wasting precious time, and my little issues wouldn’t even be the “heart” of the WOD. By spending too much time searching for the bug, I was losing out on the “real” practice.
I don’t like to use AI for my essays. Even outside of class, I’m super into writing and sharing my ideas, so I don’t feel the need to rely on AI to articulate my thoughts. I occasionally go to ChatGPT for quick grammar feedback on personal projects, but the writing process for this class has been pretty straightforward.
I definitely used AI all throughout the final project. I didn’t want to just feed the model prompts and have it regurgitate the answers for me, but when you’re making an application, there is a lot to keep track of. I mean, the WODs were decent practice, and I at least got acquainted with the different procedures, but retaining all that information at once is a different story.
I’d say I mostly went to ChatGPT for things like reminders on different CSS properties since I did a lot of the visual design. For more specific issues, I went to GitHub Copilot since it could easily refer to my code. The most difficult job I had was setting up the dynamic page that read from the database. We didn’t have an experience that covered this topic, so I really had to rely on GitHub Copilot to teach me. I just explained the situation, and it gave me a walkthrough.
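I can’t paste the project code here, but the pattern Copilot walked me through looked roughly like this sketch. It assumes a Next.js dynamic route and a Prisma client exported from '@/lib/prisma'; the item model and its fields are placeholders:

```tsx
// app/items/[id]/page.tsx -- a hypothetical dynamic route, not my actual project file.
import { prisma } from '@/lib/prisma'; // assumed Prisma client export
import { notFound } from 'next/navigation';

// The [id] folder name makes the route dynamic: whatever sits in that part
// of the URL arrives through params.
export default async function ItemPage({ params }: { params: { id: string } }) {
  // A server component can query the database directly before rendering.
  const item = await prisma.item.findUnique({ where: { id: params.id } });
  if (!item) {
    notFound(); // show the 404 page when the id doesn't match a row
  }

  return (
    <main>
      <h1>{item.name}</h1>
      <p>{item.description}</p>
    </main>
  );
}
```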
I’m always happy to talk about this because, like I’ve said, I’m a big advocate for using ChatGPT as a tutor. I even use it to review material before exams—so of course I’ve used it to study for WODs.
I tell it something like, “Can you help me learn about [topic]? I want to ask you questions along the way,” and we have a pleasant Q&A session. I just find it super helpful to have “someone” who can provide 24/7 help. Still, always make sure to double-check your sources because ChatGPT can be and has been wrong. It’s almost like a real tutor; most of the time it’s accurate, but you should never blindly trust what it tells you.
I never really asked for help or answered questions about what we were learning in the experiences. Most of the time, if I had something to ask my peers or instructors, it was more about logistics—like due dates or where to find something—rather than questions that actually dealt with the course material itself.
The same goes for smart-questions. I didn’t really run into moments where I felt stuck enough to formulate and post a detailed technical question. Things were either straightforward or something I could figure out on my own, so it just didn’t feel necessary to involve AI or even other people most of the time.
I usually read up on articles first when looking for examples, but ChatGPT is a good tool as well. I’m slightly more inclined to trust the articles since they are more likely to have been reviewed and edited, whereas I have to be more careful with the information ChatGPT fetches for me.
I specifically recall asking ChatGPT to give me examples of functions like map during the functional programming module. That was pretty helpful since there were a lot of new functions I had to familiarize myself with.
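For instance, something along these lines, with made-up data: map transforms every element, filter keeps the ones that pass a test, and reduce folds the array into a single value.

```ts
const prices = [4, 10, 25];

// map: build a new array by transforming each element
const withTax = prices.map((price) => price * 1.05); // [4.2, 10.5, 26.25]

// filter: keep only the elements that pass the test
const cheap = prices.filter((price) => price < 20); // [4, 10]

// reduce: fold the whole array down to one value
const total = prices.reduce((sum, price) => sum + price, 0); // 39

console.log(withTax, cheap, total);
```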
AI isn’t my go-to when it comes to understanding what blocks of code do. I try to analyze it myself first; after all, breaking stuff down is the more helpful route if your goal is to rebuild it. However, if I ever found myself stuck, then sure, I would ask ChatGPT to explain what each line did. I mean, literally, I’ve pasted blocks of code and told it, “Can you explain what each line does one by one?” This method was helpful when I didn’t fully get the WOD solutions provided in class; asking for explanations gave me the opportunity to walk through the solutions more slowly.
I prefer to try to understand the code I am writing rather than copy-and-paste whatever the model generates, but as I have always emphasized, I still think it is acceptable to refer to ChatGPT regularly as long as you treat your conversations as learning experiences.
As I mentioned earlier, I needed the most help during the final project while coding the dynamic page, but since we hadn’t really covered the topic in previous experiences, I had to learn on my own. Some of the code I copied, but I made sure I understood what Copilot was telling me. Blindly copying hinders growth and critical thinking, but coding is more about problem-solving than memorizing all the syntax in the world. Having an assistant, not a robot slave, makes things ten times more convenient.
I don’t use AI to document my code. Commentation is personal—they are for me to look back on and understand, so what I write is suited for me specifically. I know myself better than ChatGPT does. Of course, my comments should be understandable for others who view my code as well, but I think they are already concise enough as is.
I think we’ve all been there: spending hours and hours searching for that one little bug that’s screwing up the entire thing. It’s like looking for a needle in a haystack. As a programmer, having AI there to check my code makes life a million times easier. There have been times when I’d been working for hours and just couldn’t be bothered to dig through all my files to find the issue. I’d just go, “What is causing this error?” and it’d give me the answer lickety-split. Real cool.
I suppose models like ChatGPT can be helpful for organization. I distinctly remember telling it, “Hey, my teammate hasn’t done his part in our group project yet. How can I still get some work done if my part depends on his?” This isn’t a super technical aspect, but I was able to be more productive, so it’s still a win in my book.
Well, it’s safe to say that AI has definitely enhanced my learning experience. It’s helped me understand tough concepts more quickly, especially when I can talk through problems with it like I would in a tutoring session. I really appreciate that ChatGPT can act as a guide without handing me all the answers. That said, it will never replace actual learning—you still have to put in the effort. Simply copying will not suffice. But overall, it’s made studying software engineering less frustrating and a lot more doable.
Aside from generative AI, other AI technologies are used in software engineering to improve efficiency. For example, machine learning helps predict issues in systems before they happen, and automated testing tools can find bugs faster. AI is also used to monitor apps in real time, allowing developers to fix problems quickly. These applications help make development quicker, more accurate, and less error-prone.
One challenge I faced with AI in the course was the occasional inaccuracy or confusion in the answers, especially when I relied on it for more complex concepts. In that case, it was better to turn to other sources on the internet or ask my peers and instructors for help. Luckily, this didn’t happen too often.
There are plenty of opportunities for AI to be more integrated into software engineering education. For example, AI could provide personalized tutoring sessions that adapt to a student’s learning pace, making it easier to grasp difficult concepts. AI-powered code review systems could also be used to give immediate feedback, helping students improve their skills in real-time.
AI can never replace traditional teaching, but it sure does have its perks. “Real life” learning gives you structure, solid foundations, and guidance from someone who knows the material, and depending on the instructor, empathy as well as creative, outside-of-the-box approaches can make a huge difference. AI, on the other hand, is more flexible and fast. It’s like having a tutor who is available 24/7.
Personally, I think the best learning happens when you use both. Teachers help with deeper understanding and feedback, while AI fills in gaps, debugs quickly, and lets you explore things at your own pace. For me, combining the two has made learning more engaging and practical.
AI is constantly being developed; it will keep getting better at personalizing learning—maybe even automatically adapting to one’s style or level. I’d be very surprised if it didn’t become more common to see it built into course platforms.
The biggest challenge is making sure students don’t rely on it too much. Unfortunately, not enough people have the willpower, or even care enough, to avoid overdepending on AI. It is eroding people’s ability to think critically, and I worry for future generations, given how many youngsters use it to plagiarize. Ideally, AI should guide learning, not replace it. The user mindset has to shift.
AI was a delight to use in this course, especially for learning new concepts, debugging, and reviewing material before WODs. As long as it’s used as a tool and not a crutch, it really does boost learning. For future courses, I’d urge students to treat AI like a tutor: ask questions, have conversations, and focus on understanding instead of copying. That way, you are not replacing effort; you are supporting your own growth.