Re: [Tails-dev] from sdmem to memtest, and testing procedur…

Author: anonym
Date:  
To: The Tails public development discussion list
Subject: Re: [Tails-dev] from sdmem to memtest, and testing procedures [Was: Testing Tails 0.9~rc1]
12/22/2011 11:46 PM, intrigeri:
>
> So what to do? Do we consider the move implemented by this branch as
> a fix for a serious bug, that would be worth a freeze exception?


Not sure yet, see below for a problem I encountered.

> The diffstat is pretty small (a few file renames make it appear bigger
> than it really is), the changes are self-contained, and I would be
> delighted to see us use the memtest kernel feature ASAP after having
> had it enabled in Debian (yeah, this last one does not count), so my
> answer would be a clear "yes" iff. the following happens quickly:
>
>     experimental results show the branch actually fixes the bug
>   AND
>     at least two people other than me try it (preferably on bare
>     metal) and confirm it seems to work as well for them as the
>     sdmem-based solution we shipped in Tails 0.9.


I tested this on a 64-bit laptop (so bare metal) with 4 GB of RAM
(although 512 MB of it is reserved by the graphics card). I couldn't run
32 instances of fillram (or at least the system was frozen for ~30
minutes before I gave up), so I just ran one instance, which was duly
OOM-killed.
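For reference, the parallel part of the test boils down to something like
the sketch below (my own reconstruction, not the test suite's exact code):
launch N fillram instances in the background and wait for the OOM killer
to reap them all. fillram is the marker-pattern memory filler from the
Tails test suite and is assumed to be in PATH; N=1 matches the single
instance I ended up running.

```shell
# Sketch only: assumes Tails' fillram helper is in PATH.
N=1
for i in $(seq 1 "$N"); do
  fillram &   # each instance allocates RAM filled with the marker pattern
done
wait          # returns once every instance has exited (or been OOM-killed)
```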

For tests 2 and 3 (release_process/test/erase_memory_on_shutdown) I
rebooted into a 64-bit lenny (so CONFIG_STRICT_DEVMEM is disabled), and
got hundreds of millions of hits in *both* tests :( (although test 3 had
noticeably fewer hits than test 2). Not good.
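For anyone wanting to reproduce the raw hit count, it amounts to
scanning physical memory for the fillram marker. A minimal sketch
(count_hits and its arguments are my own naming; scanning /dev/mem
requires a kernel with CONFIG_STRICT_DEVMEM disabled, as with the lenny
kernel above):

```shell
count_hits() {
  # $1: device or image to scan, $2: number of 1 MB blocks to read.
  # Counts lines containing the marker, which is how grep -c works;
  # that is the same measure as in the per-segment script in [1].
  dd if="$1" bs=1M count="$2" 2> /dev/null | grep -c wipe_didnt_work
}
# e.g.: count_hits /dev/mem "$MEMSIZE"
```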

I made a short script that prints the number of hits per 10 MB memory
segment [1]: the 0-10 range had 64511 hits, 10-800 had none, but the
ranges above 800 were either completely full of hits or at least had >0
hits. I didn't look at Tails' memory consumption before I rebooted to do
test 3, so I'm unsure how many of the 10-800 blocks were actually
cleared and how many were allocated (and hence not touchable by
fillram), but I doubt all of them were allocated (a freshly booted Tails
doesn't use *that* much, right?).

Also, when I rebooted Tails, nothing indicated that the memory wipe had
failed. The screen got funky colours, which I've also seen on previous
successful wipes.

It seems something is amiss, so I'm not sure this should be merged
before we know what's going on (it could be that something went wrong in
my test, and I can't try again because I won't have access to that
computer for a day or so).

[1] This is the script I used (beware! written down from memory):

# set MEMSIZE to the amount of physical memory in megabytes
blocks=10
for X in $(seq 0 $((MEMSIZE / blocks))); do
  start=$((X * blocks))
  stop=$(((X + 1) * blocks))
  printf 'Range %s to %s: ' "$start" "$stop"
  dd if=/dev/mem count="$blocks" skip="$start" bs=1M 2> /dev/null | \
    grep -c wipe_didnt_work
done