Poster
Boundary Probing for Input Privacy Protection When Using LMM Services
Xiaofei Hui · Haoxuan Qu · Ping Hu · Hossein Rahmani · Jun Liu
Alongside the rapid development of Large Multimodal Models (LMMs) like GPT-4V, privacy concerns also arise. As LMMs are commonly deployed as cloud services, users typically must upload their personal images and videos to the cloud to access these services, raising serious concerns about visual privacy leakage. In this paper, we investigate the critical but underexplored problem of preserving the LMM's good performance while protecting the visual privacy information in the input data. We tackle this problem in the practical scenario where the LMM remains a black box, i.e., we can only access its input and output without knowing the LMM's internal information. To address this challenging problem, we propose a new Privacy-Aware Boundary Probing (PABP) framework, which, from a novel perspective, converts the problem into a privacy optimization problem guided by the decision boundary between the "satisfactory" and "unsatisfactory" LMM utility states. We propose two tailored schemes, Gradually-Expanding-Probing (GEP) and Prior-Guided-Probing (PGP), to maintain satisfactory LMM performance while achieving privacy protection. We show the effectiveness of our framework on different benchmarks (code will be released).
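The black-box boundary-probing idea can be illustrated with a minimal sketch in the spirit of Gradually-Expanding-Probing: starting from a weak privacy perturbation, repeatedly strengthen it and query a black-box utility check, keeping the strongest perturbation that still leaves the LMM's output satisfactory. This is not the authors' released code; the names `apply_privacy` and `utility_ok` are hypothetical stand-ins for a privacy-protection operator (e.g., blurring or masking) and a check of the LMM's output quality.

```python
# Illustrative sketch only (not the PABP implementation).
# `apply_privacy(image, strength)` and `utility_ok(protected)` are
# hypothetical stand-ins for a privacy perturbation and a black-box
# check that the LMM's output on the protected input is satisfactory.

def gradually_expanding_probe(image, utility_ok, apply_privacy,
                              step=0.1, max_strength=1.0):
    """Expand the privacy-protection strength step by step until the
    utility check fails, i.e., until the decision boundary between the
    "satisfactory" and "unsatisfactory" states is crossed, then return
    the last strength that kept the utility satisfactory."""
    best = 0.0
    strength = step
    while strength <= max_strength:
        protected = apply_privacy(image, strength)
        if not utility_ok(protected):
            break            # crossed into the "unsatisfactory" state
        best = strength      # still on the "satisfactory" side
        strength += step
    return best, apply_privacy(image, best)
```

With toy stand-ins, e.g. `apply_privacy = lambda img, s: s` and `utility_ok = lambda p: p < 0.45`, the probe settles on a strength just below the utility boundary at 0.45.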