Edge computing promises lower processing latencies and better privacy control than cloud computing because edge devices are positioned closer to users. Realizing this promise depends on building strong theoretical and engineering foundations for computing on an edge continuum that connects edge devices to other resources. In the SPEC-RG Cloud Group, we conducted a systematic study of computing models leveraging the edge continuum and found that these models share many characteristics. Despite these commonalities, no systematic model or architecture for the edge continuum currently exists. In this paper, we address this need by proposing a reference architecture for the edge continuum and mapping onto it a diverse set of state-of-the-art resource managers from different computing models, providing strong evidence of the architecture's generality. Additionally, we demonstrate the utility of the architecture by designing a deployment and benchmarking framework for edge continuum applications and by investigating the performance of individual components of the architecture. To enhance the performance analysis capabilities of the benchmark, we introduce an analytical first-order performance model that can be used to explore multiple application deployment scenarios, such as local processing on endpoints or offloading to edge or cloud resources. The deployment and benchmarking framework is open-source and available at https://github.com/atlarge-research/continuum.
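To illustrate the kind of comparison such a first-order model enables, the following is a minimal, hypothetical sketch of estimating end-to-end time for local, edge, and cloud deployments. All function names, parameters, and numbers are illustrative assumptions, not the model defined in the paper.

```python
# Hypothetical first-order performance model: end-to-end time is the
# network transfer cost (only when offloading) plus the compute cost.
# All names and values below are illustrative assumptions.

def end_to_end_time(work_ops, data_bytes, compute_ops_per_s,
                    bandwidth_bytes_per_s=None, rtt_s=0.0):
    """Estimate seconds to process a job of `work_ops` operations on
    `data_bytes` of input, optionally shipped over a network first."""
    network = 0.0
    if bandwidth_bytes_per_s is not None:  # offloading scenario
        network = rtt_s + data_bytes / bandwidth_bytes_per_s
    return network + work_ops / compute_ops_per_s

# Illustrative scenario: a 5 MB input requiring 1e9 operations.
work, data = 1e9, 5e6

local = end_to_end_time(work, data, compute_ops_per_s=1e8)  # weak endpoint
edge = end_to_end_time(work, data, 1e9,
                       bandwidth_bytes_per_s=12.5e6, rtt_s=0.01)
cloud = end_to_end_time(work, data, 1e10,
                        bandwidth_bytes_per_s=6.25e6, rtt_s=0.05)

print(f"local: {local:.2f}s  edge: {edge:.2f}s  cloud: {cloud:.2f}s")
# → local: 10.00s  edge: 1.41s  cloud: 0.95s
```

Under these assumed numbers, offloading wins despite the network cost because the endpoint's compute capacity dominates the total time; changing the bandwidth or job size can flip the decision, which is exactly the trade-off space such a model lets one explore.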