We study privacy in federated learning systems where the model is partitioned into global and local components, the latter personalized to each participating client and never shared. This setting gives rise to a new type of privacy breach: the server may learn a client’s local model from the client’s updates to the global model. Using on-device recommendation as a motivating example, we show that this breach can in fact occur under a variety of communication protocols, even when the client obscures its update messages with noise. These findings raise new questions and open problems about privacy in an emerging application of federated learning.