Many contemporary implementations of artificial intelligence are marked by their opacity. This opacity represents both a challenge and an opportunity for programmers and philosophers alike: opaque technologies, while difficult to comprehensively model and understand on a “human” level, allow machines to achieve novel results in areas such as natural language processing, creativity, and vision that were historically regarded as beyond the capability of non-human agents. This paradigm shift stems from artificial intelligence’s status within computer science as an engineering problem: computer scientists seek to achieve these results without questioning whether such results should be regarded as out of reach for machines in the first place. Many, though by no means all, philosophers have long held that certain tasks deeply linked to the mechanism of consciousness – such as semantic value acquisition and alignment – are impossible goals for inorganic “thinkers”. I argue that this paradigm shift in the engineering of artificial intelligence forces us to reconsider such scepticism. Here, the case study of GPT-3 – a natural language processor underpinned by opaque AI methods – is used to illustrate the point. While we cannot regard GPT-3 as having the ability to communicate in the same way that humans do, its underlying architecture and its flexibility across different natural language-based scenarios present an opportunity to reassess phenomenologically informed misgivings about the possibility of machine consciousness. Furthermore, I use the example of GPT-3 to bring in ideas from contemporary discussions in the philosophy of mind and to assess their compatibility with Husserlian principles, as Husserl’s phenomenology informs the direction of my doctoral research project.
While Husserl’s edict against “speculative synthesis” in the quest to understand consciousness must be heeded at all times, I argue that the recent paradigm shift in the development of AI technologies requires us to reconsider the limits of phenomenological analysis. Emerging AI technologies present us with the opportunity to do so, and our analysis is strengthened when we consider results from modern nonreductive accounts of consciousness. If we are to comprehensively tackle the question of artificial consciousness, there is much to be gained from doing so from a Husserlian-phenomenological position, and this position is only enhanced by careful consideration of current issues in the philosophy of mind.