This study investigates the effectiveness of teacher, AI-generated, and hybrid teacher-AI feedback on university students’ English writing performance in Hong Kong. Using a mixed-methods approach, the research examines the impact of different feedback types on student motivation, feedback quality, and essay revisions. A total of 1,267 students participated in an experimental design, with essays evaluated across three groups: human feedback, AI feedback, and hybrid feedback. Quantitative findings indicate that human feedback produced the largest essay score improvements, followed by hybrid feedback, with AI feedback producing the smallest. Thematic analysis of student interviews revealed a preference for human feedback, with students citing personalisation, specificity, and trust as its key advantages. While hybrid feedback showed some benefits, students reported being less motivated by it than by human feedback. The study highlights both opportunities and limitations in integrating AI into feedback practices, emphasising the need for structured human-AI collaboration rather than full automation. These findings offer valuable insights for educators, policymakers, and AI developers seeking to enhance feedback mechanisms in English language learning contexts.