At the most conservative estimates, ATLAS will produce over 1 PB of data per year, requiring 1-2M SPECint95 of CPU to process and analyse it and to generate large Monte Carlo datasets. The collaboration is worldwide, and only Grids will allow all collaborators to have access to the full datasets. ATLAS must therefore develop an intercontinental distributed computing and data Grid with a user interface that shields the user from the Grid middleware and the distributed nature of the processing; it must develop automated production systems using the Grid tools; and it must provide tools that automatically distribute, install and verify the required experimental software and run-time environment at remote sites, avoiding the problems of chaotic multi-site management. Bookkeeping, replication and monitoring tools are also required. All of these topics are being addressed within the collaboration, and the Grid tools are being exercised in large-scale Data Challenges.