This paper investigates how natural language communication with an AI agent affects human cooperative behaviour in indefinitely repeated Prisoner's Dilemma games. We conduct a laboratory experiment (n = 126) with two between-subjects treatments that vary whether human participants chat with an AI chatbot (GPT-5.2) before every round or only before the first round of each supergame, and we benchmark against human-human data from Dvorak and Fehrler (2024; n = 108). We find four main results. First, cooperation with the AI starts high, at levels comparable to human-human play, but whereas human-human cooperation converges to near-complete levels, cooperation with the AI plateaus short of full cooperation. Second, repeated communication, which substantially increases cooperation between humans, has no detectable effect in the human-AI setting. Third, strategy estimation reveals that human-AI subjects favour Grim Trigger under pre-play communication and remain dispersed under repeated communication, whereas human-human subjects converge to Tit-for-Tat and unconditional cooperation, respectively. Fourth, human-AI conversations contain more explicit strategy commitments but fewer emotional and social messages. These results suggest that humans cooperate with AI at high rates but do not develop the trust observed in human-human interactions: cooperation with the AI is sustained by conditional rules rather than by the social bonds and mutual understanding that characterise human-human cooperation.