Bug#818280: upgrade-reports: System is unusable after "aptitude safe-upgrade"


Bug#818280: upgrade-reports: System is unusable after "aptitude safe-upgrade"

Gilles Sadowski-2
Package: upgrade-reports
Severity: critical
Justification: breaks the whole system

Dear Maintainer,

   * What led up to the situation?

I first ran
 # aptitude safe-upgrade
which completed successfully.

["aptitude" automatically selected packages related to "console".]

I then installed the Linux kernel 4.4 (using the available deb package from "unstable"),
followed by an upgrade of "firmware-linux" and "firmware-linux-nonfree".

Note: the initramfs generation was performed by "dracut" because the "initramfs"
package crashed when invoked by grub (this broken behaviour started about two weeks ago, IIRC).

I then issued the command
 # reboot

The reboot/halt message appeared on the console, but the system did not reboot.
It was possible to log in and reissue the same command, to the same (non-)effect.

I powered off the machine and restarted it.

The boot sequence seemed normal until file system checks, with a message from systemd:
"a job is running" (or something like that).

After the 1 min 30 s timeout, I was presented with the maintenance prompt.
Trying to continue (^D) led to this error message:

---CUT---
Error setting authority: Error initializing authority: Could not connect: No such file or directory (g-io-error-quark, 1)
---CUT---

With
 # journalctl -xb

I could spot problems being reported (lines prefixed with a string containing "systemd"):
Failed to start Console System Startup Logging
console-kit-log-system-start.service: Unit entered failed state
[...]
Failed to start Create Volatile Files and Directories
[...]
emergency.service: Failed at step EXEC spawning /bin/plymouth: No such file or directory
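For reference, failure lines like the above can also be pulled out of a saved journal with a simple filter (or directly with "journalctl -xb -p err"). A minimal sketch, assuming the journal was first dumped to a hypothetical file "boot.log" with "journalctl -xb > boot.log"; the sample lines below are taken from this report:

```shell
# Sketch: recreate a journal dump with the failure lines from the report.
# On a real system this file would come from: journalctl -xb > boot.log
cat > boot.log <<'EOF'
systemd[1]: Failed to start Console System Startup Logging.
systemd[1]: console-kit-log-system-start.service: Unit entered failed state.
systemd[1]: Failed to start Create Volatile Files and Directories.
EOF

# Keep only the lines that report a failed start or a failed unit.
grep -E 'Failed to start|failed state' boot.log
```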

   * What exactly did you do (or not do) that was effective (or
     ineffective)?

Upon rebooting, I chose the option "sysinit" (rather than the default grub entry).
This led to a flood of "Failed" during the boot sequence, seemingly caused by a read-only root filesystem.

Rebooted again, trying other entries (older kernels: 4.3 and 4.2).
All to no avail: same behaviour (either dropping to maintenance mode, or a read-only root).

   * What outcome did you expect instead?

I certainly did not expect to get an unusable system.

All this is on an installation where I *tried* to avoid "systemd": the "systemd-shim"
package was installed.
But it seems that "systemd" was still forcefully installed (at least partially, since I
never asked grub to create entries for "systemd", yet they exist nevertheless).

I never had so many problems with Debian until this package was suddenly present on my
machine, without any warning that the init system was going to be completely different,
and with no way to opt out before the damage (as I would see later) was done.


Is there any option to get out of this situation, short of reinstalling the system?


Regards,
Gilles


-- System Information:
Debian Release: stretch/sid
Architecture: amd64 (x86_64)

Kernel: Linux 4.2.0-1-amd64 (SMP w/8 CPU cores)
Locale: LANG=C.UTF-8, LC_CTYPE=C.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: sysvinit (via /sbin/init)


Bug#818280: upgrade-reports: System is unusable after "aptitude safe-upgrade" [SOLVED]

Gilles Sadowski-2
Hi.

It turned out that the problem was related to LVM partitions not being
"available" while "systemd" was trying to run "fsck" on them.

In maintenance mode, I just ran

 # vgchange -a y theVolumeGroupName

and the logical volumes were detected and mounted automatically.
The boot sequence could then proceed normally.
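The recovery steps can be sketched as below. This is a dry-run illustration only: "theVolumeGroupName" is the placeholder from this report (on a real system, find the actual name with "vgs" first), and the run() wrapper prints the commands instead of executing them, since they need a live LVM setup and root privileges:

```shell
# Dry-run sketch of the emergency-shell recovery (illustration only).
# run() prints each command rather than executing it.
run() { printf '+ %s\n' "$*"; }

run vgchange -a y theVolumeGroupName   # activate the LVM volume group
run mount -a                           # mount the now-visible logical volumes
```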

Why couldn't "systemd" figure out the problem?


Regards,
Gilles