After installing the toolkit, one just needs to write a simple C program
that calls cudaHostAlloc() to allocate a sufficiently large piece of
pinned memory, and then dumps all non-zero content to disk for later
examination. One can do this repeatedly over time, gathering more and
more data.
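A minimal sketch of the scanning part of such a program (the names here are mine, not from the original code; plain malloc stands in for the allocation so the sketch is self-contained - with the toolkit installed, the buffer would instead come from the cudaHostAlloc() call shown in the comment):

```c
#include <stdio.h>
#include <stdlib.h>

/* Write every non-zero byte of buf to out; returns how many were found.
 * In the real proof of concept, buf would come from:
 *   cudaHostAlloc((void **)&buf, len, cudaHostAllocDefault);
 * which on the affected Linux driver returned uninitialized pinned pages. */
static size_t dump_nonzero(const unsigned char *buf, size_t len, FILE *out)
{
    size_t found = 0;
    for (size_t i = 0; i < len; i++) {
        if (buf[i] != 0) {
            fputc(buf[i], out);
            found++;
        }
    }
    return found;
}
```

Calling this in a loop on freshly allocated pinned buffers, appending the output to a file each time, gives the timed capture described above.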
As an example, our simplest "proof of concept" program was able to
catch large fragments of files (or even entire files) being written or
read by other users - emails, documents, various system logs, inputs,
outputs, etc. - virtually everything one might expect to find in the
OS file cache and in the released memory of other programs.
It seems the bug has been there since the first days of CUDA. It does not
exist under Windows (my Microsoft contacts pointed out that
Windows forcibly zeroes any memory exported to user space, which is
most likely why it does not show up there), and I have not checked
OS X as of yet.
However, every owner of an Nvidia graphics card running Linux with the
Nvidia graphics drivers should now consider switching to the Nouveau
driver, or even to GPUs from other vendors (or maybe to the Windows OS :) ).
As for Nvidia's response, it was really strange - their position
was that if an application does not want its data to be visible to
other programs/users via this security issue, it should explicitly
clear the data in memory before releasing it. This is a really strange
and absolutely wrong idea; moreover, the contents of the OS file cache,
unused physical memory, etc. cannot be cleared from within user
programs at all.
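For what it's worth, the suggested discipline amounts to something like the sketch below (scrub and scrub_and_free are hypothetical helper names; with pinned CUDA memory the release call would be cudaFreeHost rather than free). By construction it can only cover the application's own buffers, never the file cache or pages freed by other processes:

```c
#include <stdlib.h>

/* Zero a buffer before handing it back to the allocator/driver.
 * The volatile pointer keeps the compiler from optimizing the wipe
 * away as a dead store; on glibc, explicit_bzero() serves the same
 * purpose. */
static void scrub(void *p, size_t len)
{
    volatile unsigned char *b = p;
    while (len--)
        *b++ = 0;
}

static void scrub_and_free(void *p, size_t len)
{
    if (p) {
        scrub(p, len);
        free(p);   /* with CUDA pinned memory: cudaFreeHost(p); */
    }
}
```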
More globally, this raises the question of how trustworthy all
third-party proprietary drivers that are capable of exposing
memory into user space really are.
On Thu Jan 6 '11 5:02pm, Alex Granovsky wrote
>>P.S. While working on the Linux port of our MP4/CUDA code, initially
>>developed a year ago for the Windows version, we unexpectedly found
>>that (unlike the Windows CUDA implementation) the cudaHostAlloc/cuMemHostAlloc
>>CUDA API calls return uninitialized pinned memory.
>>Depending on how exactly this pinned memory is allocated by the CUDA
>>runtime/CUDA driver, this may be a serious system-wide security
>>hole, potentially allowing one to examine regions of memory previously
>>used by other programs and by the Linux kernel itself. We are now in contact
>>with NVidia, trying to clarify as many details of this problem as
>>possible. Meanwhile, we'd recommend that everybody stop running the CUDA
>>drivers on any multiuser Linux system.
>After some more tests, we can confirm that this is indeed a
>very serious security hole. E.g., we were able to examine the contents
>of pages evicted from the Linux file cache using this hole.
For reference purposes, here is the relevant version info:
CUDA driver: devdriver_3.2_linux_64_260.19.26.run
Linux: OpenSuSE 11.3 x64
uname -a: Linux phen 2.6.34-12-desktop #1 SMP PREEMPT 2010-06-29 02:39:08 +0200 x86_64 x86_64 x86_64 GNU/Linux