The S3O4D dataset was first described in the "Stanford 3D Objects" section of the paper Disentangling by Subspace Diffusion. It consists of 100,000 renderings each of the Bunny and Dragon objects from the Stanford 3D Scanning Repository. More objects may be added in the future, but only the Bunny and Dragon are used in the paper. Each object is rendered with an illumination direction sampled uniformly from the 2-sphere and a uniformly sampled 3D rotation. The true latent states are provided as NumPy arrays alongside the images. The lighting is given as a unit-norm 3-vector, while the rotation is provided both as a quaternion and as a 3x3 orthogonal matrix.

There are many similarities between S3O4D and existing ML benchmark datasets like NORB, 3D Chairs, 3D Shapes and many others, which also include renderings of a set of objects under different pose and illumination conditions. However, none of these existing datasets includes the full manifold of rotations in 3D.
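The latent conventions above (a unit-norm lighting vector on the 2-sphere, and a rotation given as both a unit quaternion and a 3x3 orthogonal matrix) can be illustrated with a small NumPy sketch. This is not code from the dataset itself; the sampling below (normalized Gaussian samples for uniform points on the sphere and for uniform random rotations) is a standard construction, shown here only to make the latent formats concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform illumination direction on the 2-sphere:
# normalize a standard Gaussian sample in R^3.
light = rng.normal(size=3)
light /= np.linalg.norm(light)

# Uniform random 3D rotation as a unit quaternion
# (normalized Gaussian sample in R^4), then converted
# to the equivalent 3x3 rotation matrix.
q = rng.normal(size=4)
q /= np.linalg.norm(q)
w, x, y, z = q
R = np.array([
    [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
    [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
    [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
])

# Properties matching the dataset's latent description:
# unit-norm lighting vector, orthogonal rotation matrix.
assert np.isclose(np.linalg.norm(light), 1.0)
assert np.allclose(R @ R.T, np.eye(3), atol=1e-10)
assert np.isclose(np.linalg.det(R), 1.0)
```

Because both the quaternion and the matrix form are provided with each image, downstream code can pick whichever parameterization of SO(3) is more convenient.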