Archive for December, 2008

Today I had to enable the testing tree of Debian.

This was actually very easy: just add the correct repository to sources.list in /etc/apt. The one little hiccup was aptitude crashing with an out-of-memory error. I solved this by adding APT::Cache-Limit 26777216 to the apt configuration. I did this by (dirtily) appending the line to the /etc/apt/apt.conf.d/70debconf file (I tried creating a new file in the conf directory, but apt ignored it).
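A sketch of the two changes; the mirror URL and suite components are examples, not necessarily the ones I used, and the apt.conf line uses the quoted-value syntax apt.conf(5) expects:

```shell
# Add the testing suite to sources.list (example mirror URL):
echo 'deb http://ftp.debian.org/debian testing main contrib non-free' \
    >> /etc/apt/sources.list
# apt.conf syntax wants a quoted value and a trailing semicolon:
echo 'APT::Cache-Limit "26777216";' >> /etc/apt/apt.conf.d/70debconf
```

Both commands need root, of course.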

Next I ran aptitude and updated my packages. This took quite some time, since a big load of packages got updated and I had to resolve some conflicts manually. Actually, I just had to remove an obsolete package that was holding a lot of other packages back.

As I was doing maintenance anyway, I ran uname -a to see which kernel I was on. Big surprise: even though I had installed about five more recent kernels on my hard disk, it seems I had never actually booted any of them! So I headed over to /boot, linked vmlinuz and initrd.img to the most recent ones, ran lilo and rebooted.
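The relink dance, sketched against a scratch directory so it is safe to run; on the real system these paths are /boot, and the kernel version string here is an example:

```shell
# Scratch directory standing in for /boot; the version suffix is made up.
boot=$(mktemp -d)
touch "$boot/vmlinuz-2.6.26-1-686" "$boot/initrd.img-2.6.26-1-686"
# Point the generic names (which lilo.conf typically references) at the
# newest kernel and initrd:
ln -sf vmlinuz-2.6.26-1-686 "$boot/vmlinuz"
ln -sf initrd.img-2.6.26-1-686 "$boot/initrd.img"
readlink "$boot/vmlinuz"
# On the real box, follow up with: lilo && reboot
```

The crucial (and easy to forget) step is re-running lilo afterwards, since lilo bakes block locations into the boot sector.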

This is where it all went bad. Somehow lilo didn't get installed properly, and my system refused to boot. About ten reboots into the Hetzner rescue system later, I finally managed to install a working version of the kernel. But I'm finally running a 2.6.26 Linux kernel now.

One more problem was installing the newest version of VMware. I had to export an old gcc (4.1 instead of 4.3) and ignore complaints about minor version differences. Some time in the future I need to try again to run this shitty legacy server software, which only runs on Windows, under Wine. But guess what: their Debian repository went down just today, so I'm out of luck with that. (Why don't those big software vendors at least publish their old, discontinued software under some open-to-use licence? Then we could at least make it run on modern systems like Linux or BSD instead of needing emulation. Fuck you, Adobe!)
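The gcc trick, roughly; this assumes gcc-4.1 is installed at the usual Debian path and that the VMware installer honors the CC environment variable (the installer name is from memory and may differ):

```shell
# Point the build at the older compiler (path assumes Debian's gcc-4.1 package):
export CC=/usr/bin/gcc-4.1
echo "building with CC=$CC"
# then run the installer, e.g.: ./vmware-install.pl
```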

Long talk, short outcome: everything is back up and running now. Yay me!


Comments Off on upgrade hell

At work, we use the very nice and easy-to-use DBAN tool to wipe hard disks before they leave our hands. This ensures our users' data is safe from being restored by bad people.

However, with hard disk sizes recently growing up to 1 TB, this has become somewhat hard to do.

Usually we use the DoD-short algorithm, since it provides a fair cost/benefit ratio. A 40 GB hard disk can be wiped in about 8-10 hours without trouble. Usually I start it near the end of the day, and when I come back the next morning it's done wiping.

Now, today I have to wipe a hard disk from a user concerned about security. (A user concerned about security? Actually a very good thing.) So I thought I wouldn't use DoD-short but the standard DoD algorithm. Guess how long it takes to wipe those 80 GB... 50 hours.

TGIF, so I can go home and it will be done when I come back next week.

This made me think about two things:

  1. I think the standard DoD algorithm should always be used. If the US government doesn't fully trust the DoD-short algorithm, why should we? So, if possible, always use the standard one. But that actually means 2-3 times the time we need now.
  2. This was an 80 GB hard disk. Today's HDs go up to 1 TB.
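Scaling those numbers gives a rough feel for what's coming; pure back-of-the-envelope arithmetic, assuming wipe time grows linearly with capacity:

```shell
# 80 GB took 50 h with the standard DoD wipe; extrapolate to 1 TB (~1000 GB).
hours_80gb=50
hours_1tb=$(( hours_80gb * 1000 / 80 ))
days_1tb=$(( hours_1tb / 24 ))
echo "1 TB: about ${hours_1tb} hours (~${days_1tb} days)"
# prints: 1 TB: about 625 hours (~26 days)
```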

Conclusion: administrators will soon face the problem of securely erasing much bigger hard disks than today's. In addition, I don't think it will become harder to restore data from hard disks than it is now (more likely it will get easier due to improvements in technology). If such a wipe takes more than a week, working with it becomes troubling. In the end I can only see one useful solution: shredding hard disks into pieces. Sure, that isn't good for the environment.

The problem is that we get bigger HDs and better technology, making security ever more painful. A great debacle. I think we will soon see more of those "mistakenly sold HD with data still on it on eBay" stories.


Comments Off on Wiping Harddisks

Since I started running my Tor server, I've been getting a lot more email than usual.

Mostly from law-twisters, er, I mean legal representatives of the content industry. Unfortunately, explaining once that you run a Tor exit node isn't enough; they keep writing again and again.

So far I've had contact with two organizations:

Entertainment Software Association
575 7th Street, NW, Suite 300
Washington, DC 20004 USA

BayTSP, Inc.
PO Box 1314
Los Gatos, CA 95031

BayTSP is still fairly "humane": you get a link, go to a website, click "mistake", write "Tor" in the comment field and submit the whole thing. Very simple.
The ESA is annoying. With them I have to write a complete mail every single time.

I find the attached XML interesting. Here's an example:

<?xml version="1.0" encoding="iso-8859-1"?>

<Infringement xmlns:xsi="" xsi:noNamespaceSchemaLocation="">
<Entity>BayTSP, Inc. on behalf of Paramount Pictures</Entity>
<Contact>Compliance Manager, Compliance Team</Contact>
<Address>P.O. Box 1314, Los Gatos, California 95031 United States of America</Address>
<Phone>(408) 341-2300,(408) 341-2399</Phone>
<Entity>Hetzner Online AG</Entity>
<Title>Tropic Thunder</Title>

Does anyone know whether there is also a "reply schema"? I haven't found one. Since I get 2-3 such mails per month and basically always send the same answer, an automated process would be really nice.
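Lacking a reply schema, a minimal sketch of such automation might just scrape the interesting fields out of the notice and drop them into a canned answer. The field names come from the notice above; the sed one-liner and the reply text are my own, and a real tool should use a proper XML parser instead:

```shell
# Save an abbreviated copy of the sample notice to a temp file.
notice=$(mktemp)
cat > "$notice" <<'EOF'
<?xml version="1.0" encoding="iso-8859-1"?>
<Infringement>
  <Entity>BayTSP, Inc. on behalf of Paramount Pictures</Entity>
  <Title>Tropic Thunder</Title>
</Infringement>
EOF
# Crude extraction of the <Title> element's text content:
title=$(sed -n 's:.*<Title>\(.*\)</Title>.*:\1:p' "$notice")
echo "Re: '$title' - the IP in question is a Tor exit node; no logs exist."
```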


Comments Off on Lots of mail