Channel: Ubuntu Forums - Installation & Upgrades

SHA signatures on archive.ubuntu.com are empty

The content of Packages.gz and Packages.xz is empty in http://archive.ubuntu.com/ubuntu/ubu.../binary-amd64/

This seems to have happened somewhat recently (these files used to be ~1.3MB+), and prevents me from being able to confirm the signatures of packages after download.

Is this a known issue? I'm not sure where to look, or even where to report this (I'm guessing this isn't even the right forum, but I can't find an appropriate one).

Thanks,
Jeff
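One quick sanity check from the command line is to compare what the mirror is serving with what the signed Release file records for the same path. A minimal sketch, assuming the jammy suite (the dists path in the post is truncated, so the suite here is only an example):

Code:

# Size the mirror is currently serving for the index:
curl -sI http://archive.ubuntu.com/ubuntu/dists/jammy/main/binary-amd64/Packages.xz | grep -i content-length
# Size and SHA256 the signed Release file records for the same file:
curl -s http://archive.ubuntu.com/ubuntu/dists/jammy/Release | grep 'main/binary-amd64/Packages.xz$'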

[ubuntu] Can't dd over an image onto a newer PC with NVME drive

I have an old image for an old product that would be almost impossible to recreate from scratch at this point, so starting over with a fresh Ubuntu install and reinstalling all the custom software isn't an option for me. The good news is that, in the past, I have been able to boot an empty computer with an Ubuntu live USB stick (with the dd image copied onto it), dd the image over to the new drive on the new PC, and it boots.

The image is based on Ubuntu Server 16.04 and is new enough to include UEFI GRUB. The new computer doesn't have a legacy boot mode, but it does have Secure Boot, which I turned off. In the UEFI setup, I can see the ubuntu (NVMe) boot option, so the firmware at least sees it there. When I select it and boot from it, I get a black screen with a non-blinking cursor in the upper left. To be clear: I am not talking about the GRUB menu where you can select which kernel to boot, but about the UEFI configuration page (reached by pressing Esc when turning the computer on) that lists the available boot options.

I've tried a Boot-Repair thumb drive, but the repair fails ("grub purge failed", or something like that). If I boot the Ubuntu 18.04 live thumb drive again, I can see the partitions on the NVMe:

Code:

Device            Start      End  Sectors  Size Type
/dev/nvme0n1p1      2048  1050623  1048576  512M EFI System
/dev/nvme0n1p2  1050624 226213887 225163264 107.4G Linux filesystem
/dev/nvme0n1p3 226213888 234440703  8226816  3.9G Linux swap

If I mount p1 and look, I see the EFI folder with an ubuntu directory inside containing its GRUB files. p2 shows the usual Linux file structure. All looks well.

I even tried updating /etc/fstab in p2 and /EFI/ubuntu/grub.cfg in p1 to point to the new UUID values (something I haven't had to do before), but there was no change. Can anyone recommend anything else I can try? Thanks!
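One thing that often helps in this situation is reinstalling GRUB for the new disk from a chroot, so the firmware gets a fresh boot entry for the NVMe device. A minimal sketch from the live session, assuming the partition layout shown above (device names and mount points are illustrative, not taken from a working system):

Code:

# Boot the live USB in UEFI mode, then mount the copied system and chroot into it,
# reinstall the EFI bootloader and regenerate grub.cfg for the new disk.
sudo mount /dev/nvme0n1p2 /mnt
sudo mount /dev/nvme0n1p1 /mnt/boot/efi
for d in /dev /dev/pts /proc /sys /run; do sudo mount --bind "$d" "/mnt$d"; done
sudo chroot /mnt grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
sudo chroot /mnt update-grub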

Issue with autoinstall of Ubuntu Server 22.04 using a RAID 1 configuration

I am trying to install Ubuntu Server 22.04 (and even the latest 24.04 daily) using the autoinstall process, in order to get a bootable RAID 1 configuration on two 2 TB M.2 disks.
The process keeps crashing at the point where it needs to install GRUB on the EFI partition.
I have tried different configurations for creating the RAID, using both whole disks and partitions as devices, but to no avail: it fails to make the RAID bootable.
So the question for you guys is: how can I set up my storage to get a bootable RAID 1 configuration? I guess someone has already solved this problem...
The storage section I am using, with all my attempts, is as follows:

Code:

#cloud-config
autoinstall:
  version: 1
  storage:
    grub:
      reorder_uefi: false
    config:
    # Partition table of two disks
    # - { type: disk, ptable: gpt, path: /dev/nvme0n1, wipe: superblock, preserve: false, name: '', grub_device: true, id: disk-nvme0n1 }
    # - { type: disk, ptable: gpt, path: /dev/nvme1n1, wipe: superblock, preserve: false, name: '', grub_device: true, id: disk-nvme1n1 }
    - { type: disk, ptable: gpt, path: /dev/nvme0n1, wipe: superblock, preserve: false, name: '', id: disk-nvme0n1 }
    - { type: disk, ptable: gpt, path: /dev/nvme1n1, wipe: superblock, preserve: false, name: '', id: disk-nvme1n1 }
    - { type: partition, device: disk-nvme0n1, size: 1024MB, wipe: superblock, flag: 'boot',partition_type: EF00, preserve: false,  grub_device: true, id: part-grub }
    #- { type: partition, device: disk-nvme0n1, size: 32GB, wipe: superblock, flag: 'swap', preserve: false, id: part-swap }
    - { type: partition, device: disk-nvme0n1, size: 256GB, wipe: superblock, preserve: false,  id: part-home }
    - { type: partition, device: disk-nvme0n1, size: -1, wipe: superblock, preserve: false,  id: part-data }
    #
    - { type: partition, device: disk-nvme1n1, size: 1024MB, wipe: superblock, preserve: false,  grub_device: false, id: part2-grub }
    #- { type: partition, device: disk-nvme1n1, size: 32GB, wipe: superblock, flag: 'swap', preserve: false, id: part2-swap }
    - { type: partition, device: disk-nvme1n1, size: 256GB, wipe: superblock, preserve: false,  id: part2-home }
    - { type: partition, device: disk-nvme1n1, size: -1, wipe: superblock, preserve: false,  id: part2-data }
    #
    #- { type: raid, name: md0, raidlevel: 1, devices: [ part-grub, part2-grub ], spare_devices: [], preserve: false, wipe: superblock-recursive, ptable: gpt, id: disk-raid-grub }
    #- { type: raid, name: md1, raidlevel: 1, devices: [ part-swap, part2-swap ], spare_devices: [], preserve: false, wipe: superblock-recursive, ptable: gpt, id: disk-raid-swap }
    - { type: raid, name: md2, raidlevel: 1, devices: [ part-home, part2-home ], spare_devices: [], preserve: false, wipe: superblock-recursive, ptable: gpt, id: disk-raid-home }
    - { type: raid, name: md3, raidlevel: 1, devices: [ part-data, part2-data ], spare_devices: [], preserve: false, wipe: superblock-recursive, ptable: gpt, id: disk-raid-data }
    #- { type: raid, name: md0, raidlevel: 1, devices: [ disk-nvme0n1, disk-nvme1n1 ], spare_devices: [], preserve: false, wipe: superblock-recursive, ptable: gpt, id: disk-raid }
    #- { type: partition, device: disk-raid, flag: bios_grub, id: part-grub, number: 1, preserve: false,  size: 1MB }
    # - { type: partition, device: disk-raid, size: 1024MB, wipe: superblock, flag: 'bios_grub', preserve: false,  grub_device: true, id: part-grub }
    #- { type: partition, device: disk-raid, size: 1024MB, wipe: superblock, flag: 'boot',partition_type: EF00, preserve: false,  grub_device: true, id: part-grub }
    #- { type: partition, device: disk-raid, size: 32GB, wipe: superblock, flag: 'swap', preserve: false, id: part-swap }
    #- { type: partition, device: disk-raid, size: 256GB, wipe: superblock, preserve: false,  id: part-home }
    #- { type: partition, device: disk-raid, size: -1, wipe: superblock, preserve: false,  id: part-data }
    #- { type: partition, device: disk-raid-grub, size: 1024MB, wipe: superblock, flag: 'boot',partition_type: EF00, preserve: false,  grub_device: true, id: part-raid-grub }
    #- { type: partition, device: disk-raid-grub, size: 1024MB, wipe: superblock, flag: 'boot', preserve: false,  grub_device: true, id: part-raid-grub }
    #- { type: partition, device: disk-raid-swap, size: 32GB, wipe: superblock, flag: 'swap', preserve: false, id: part-raid-swap }
    - { type: partition, device: disk-raid-home, size: 256GB, wipe: superblock, preserve: false,  id: part-raid-home }
    - { type: partition, device: disk-raid-data, size: -1, wipe: superblock, preserve: false,  id: part-raid-data }
    #
    - { type: format, volume: part-grub, fstype: fat32, preserve: false, id: format-grub }
    # { type: format, volume: part2-grub, fstype: fat32, preserve: false, id: format2-grub }
    #- { type: format, volume: part-raid-swap, fstype: swap, preserve: false, id: format-swap }
    - { type: format, volume: part-raid-home, fstype: btrfs, preserve: false, id: format-home }
    - { type: format, volume: part-raid-data, fstype: btrfs, preserve: false, id: format-data }
    - { type: mount, path: '/boot/efi', device: format-grub, id: mount-grub }
    #- { type: mount, path: '/boot/efi', device: format2-grub, id: mount2-grub }
    #- { type: mount, path: '', device: format-swap, id: mount-swap }
    - { type: mount, path: '/', device: format-home, id: mount-home }
    - { type: mount, path: '/data', device: format-data, id: mount-data }

    # - { type: raid, name: md1, raidlevel: 1, devices: [ part-swap, part2-swap ], spare_devices: [], preserve: false, wipe: superblock, ptable: gpt, id: raid-swap }
    # - { type: partition, device: raid-swap, size: 32GB, wipe: superblock, flag: 'swap', number: 2, preserve: false, id: part-raid-swap }
    # - { type: format, volume: part-raid-swap, fstype: swap, preserve:false, id:format-swap }
    # - { ptable: gpt, path: /dev/nvme1n1, wipe: superblock-recursive, preserve: false, name: '', grub_device: true, type: disk, id: disk-nvme1n1 }
    # - { device: disk-nvme1n1, flag: bios_grub, id: partition-1, number: 1, preserve: false, size: 1048576, type: partition }
    # - { device: disk-nvme1n1, size: 1024M, wipe: superblock, flag: boot, number: 2, preserve: false, grub_device: true, type: partition, id: partition-3 }
    # - { device: disk-nvme1n1, size: 32G, wipe: superblock, flag: '', number: 3, preserve: false, grub_device: false, type: partition, id: partition-5 }
    # - { device: disk-nvme1n1, size: 256G, wipe: superblock, flag: '', number: 4, preserve: false, grub_device: false, type: partition, id: partition-7 }
    #- { name: md0, raidlevel: raid1, devices: [ partition-2, partition-3 ], spare_devices: [], preserve: false, wipe: superblock, ptable: gpt, type: raid, id: raid-0 }
    #- { device: raid-0, size: 1024M, wipe: superblock, flag: 'boot', number: 1, preserve: false, grub_device: true, type: partition, id: partition-8 }
    #- { fstype: fat32, volume: partition-8, preserve: false, type: format, id: format-3 }
   
    # - { name: md1, raidlevel: raid1, devices: [ partition-4, partition-5 ], spare_devices: [], preserve: false, wipe: superblock, ptable: gpt, type: raid, id: raid-1 }
    # - { device: raid-1, size: 32G, wipe: superblock, flag: swap, number: 2, preserve: false, grub_device: false, type: format, id: partition-9}
    # - { fstype: swap, volume: partition-9, preserve: false, type: format, id: format-4 }
    # - { type: mount, path: '', device: format-4, id: mount-3 }
    # - { name: md2, raidlevel: raid1, devices: [ partition-6, partition-7 ], spare_devices: [], preserve: false, wipe: superblock, ptable: gpt, type: raid, id: raid-2 }
    # - { device: raid-2, size: 256G, wipe: superblock, flag: '', number: 3, preserve: false, grub_device: false, type: partition, id: partition-10 }
    # - { fstype: btrfs, volume: partition-10, preserve: false, type: format, id: format-5 }
    # - { path: /, device: format-5, type: mount, id: mount-4 }
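For what it's worth, an EFI System Partition generally cannot live on an mdadm array, so a common pattern is the one in the uncommented config above: a plain FAT32 ESP per disk, with RAID 1 only for the data filesystems. Note that part2-grub is created there but never formatted or populated. A minimal post-install sketch for cloning the ESP to the second disk and registering it with the firmware, so either disk can boot on its own (device names and the \EFI\ubuntu\shimx64.efi path are assumptions based on the config, not taken from a working system):

Code:

# Run from the installed system after first boot.
sudo mkfs.vfat -F 32 /dev/nvme1n1p1                  # the so-far-unused part2-grub
sudo mkdir -p /mnt/esp2
sudo mount /dev/nvme1n1p1 /mnt/esp2
sudo cp -a /boot/efi/EFI /mnt/esp2/                  # copy the installed bootloader files
sudo efibootmgr --create --disk /dev/nvme1n1 --part 1 \
     --label "ubuntu (disk 2)" --loader '\EFI\ubuntu\shimx64.efi'
sudo umount /mnt/esp2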

24.04 installer: you cannot create an EFI boot partition?

Just trying out the new 24.04 LTS server, using the latest installer (as of 2024-04-17).
But I find it very strange that I cannot create an EFI partition!
This is mandatory for a bootable system.


[ubuntu] Ubuntu Jammy (in WSL): adding armhf arch, can't find packages

I'm trying to use WSL to cross-compile a C++ program for armhf on a Windows PC.
I was able to do this on a Debian image some years ago, and now I need to do it again on a new PC. Since the PowerShell command wsl.exe --install automagically installed Ubuntu Jammy, I'm fine with keeping it.


I've learned that armhf binaries are in the "ports" repository, so I added these lines to an /etc/apt/sources.list.d/armrep.list file:


Code:

deb [ arch=armhf ] http://ports.ubuntu.com/ jammy main restricted universe multiverse
deb [ arch=armhf ] http://ports.ubuntu.com/ jammy-updates main restricted universe multiverse
deb [ arch=armhf ] http://ports.ubuntu.com/ jammy-security main restricted universe multiverse
deb [ arch=armhf ] http://ports.ubuntu.com/ jammy-backports main restricted universe multiverse

Then I added the armhf architecture (sudo dpkg --add-architecture armhf) and ran apt update, but I get many errors like these:
Code:

E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy/main/binary-armhf/Packages  404  Not Found [IP: 91.189.91.83 80]
E: Failed to fetch http://security.ubuntu.com/ubuntu/dists/jammy-security/main/binary-armhf/Packages  404  Not Found [IP: 91.189.91.82 80]
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy-updates/main/binary-armhf/Packages  404  Not Found [IP: 91.189.91.83 80]
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/jammy-backports/main/binary-armhf/Packages  404  Not Found [IP: 91.189.91.83 80]
E: Some index files failed to download. They have been ignored, or old ones used instead.

Then, when I try to install the GTK 3 libraries (sudo apt install libgtk-3-dev:armhf), I get a whole lot of unmet-dependency errors.

Could someone tell me what I should do to build for armhf using the Ubuntu Jammy image available from the Windows Store for WSL?
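The 404s come from apt asking archive.ubuntu.com and security.ubuntu.com for armhf indexes they do not carry; armhf is only published on ports.ubuntu.com. The usual fix is to restrict the stock entries to amd64 so only the ports lines are consulted for armhf. A minimal sketch, assuming the default one-line-per-suite /etc/apt/sources.list of the WSL Jammy image:

Code:

# Pin the stock repositories to amd64 so apt stops requesting armhf from them,
# then refresh the indexes and retry the armhf packages.
sudo sed -i 's|^deb http|deb [arch=amd64] http|' /etc/apt/sources.list
sudo dpkg --add-architecture armhf
sudo apt update
sudo apt install libgtk-3-dev:armhf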

Install MPLABX on GitHub CI/CD: libusb-1.0.so not found

I'm trying to install MPLABX in a GitHub CI/CD pipeline. I get the message that libusb-1.0.so was not found, even though libusb-dev has been installed.

Here's the script:
Code:

name: Revomax
run-name: ${{ github.actor }} is building Revomax
on: [push]
jobs:
  tutorial:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install -g bats
      - run: bats -v
  tutorial2:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout repo
      uses: actions/checkout@v2
      with:
        submodules: 'recursive'
    - name: Install dependencies
      run: |
        sudo apt-get update && \
        sudo apt-get install -y libc6 libx11-6 libxext6 libstdc++6 libexpat1 libusb-dev && \
        sudo apt-get clean && \
        sudo apt-get autoremove && \
        sudo rm -rf /var/lib/apt/lists/*
    - name: Install XC8
      run: |
        wget https://ww1.microchip.com/downloads/aemDocuments/documents/DEV/ProductDocuments/SoftwareTools/xc8-v2.36-full-install-linux-x64-installer.run &&
        chmod +x xc8-v2.36-full-install-linux-x64-installer.run && \
        sudo ./xc8-v2.36-full-install-linux-x64-installer.run --mode unattended --unattendedmodeui none --netservername localhost --prefix "/opt/microchip-mplabxc8-bin/"
    - name: Install MPLABX
      run: |
        wget -U "Mozilla" https://www.microchip.com/bin/download?f=aHR0cHM6Ly93dzEubWljcm9jaGlwLmNvbS9kb3dubG9hZHMvYWVtRG9jdW1lbnRzL2RvY3VtZW50cy9ERVYvUHJvZHVjdERvY3VtZW50cy9Tb2Z0d2FyZVRvb2xzL01QTEFCWC12Ni4xMC1saW51eC1pbnN0YWxsZXIudGFy -O MPLABX-v6.10-linux-installer.tar &&
        tar -xf MPLABX-v6.10-linux-installer.tar &&
        sudo ./MPLABX-v6.10-linux-installer.sh --nox11 -- --unattendedmodeui none --mode unattended --ipe 0 --collectInfo 0 --installdir /opt/mplabx
    - name: Output results
      run: |
        ls -l

Here's the output:
Code:

$ act
[Revomax/tutorial ] 🚀  Start image=catthehacker/ubuntu:act-latest
[Revomax/tutorial2] 🚀  Start image=catthehacker/ubuntu:act-latest
[Revomax/tutorial ]  🐳  docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
[Revomax/tutorial2]  🐳  docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
[Revomax/tutorial ]  🐳  docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Revomax/tutorial ]  🐳  docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Revomax/tutorial ]  ☁  git clone 'https://github.com/actions/setup-node' # ref=v4
[Revomax/tutorial2]  🐳  docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Revomax/tutorial2]  🐳  docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Revomax/tutorial2] ⭐ Run Main Checkout repo
[Revomax/tutorial2]  🐳  docker cp src=/home/werk/git-werkmap/OptiClimateRevolution_FW/. dst=/home/werk/git-werkmap/OptiClimateRevolution_FW
[Revomax/tutorial2]  ✅  Success - Main Checkout repo
[Revomax/tutorial2] ⭐ Run Main Install dependencies
[Revomax/tutorial2]  🐳  docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/1] user= workdir=
Get:1 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease [3632 B]
Get:2 https://ppa.launchpadcontent.net/git-core/ppa/ubuntu jammy InRelease [23.8 kB]
Get:3 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]     
Get:4 http://archive.ubuntu.com/ubuntu jammy InRelease [270 kB]               
Get:6 https://packages.microsoft.com/ubuntu/22.04/prod jammy/main amd64 Packages [141 kB]
Get:7 https://packages.microsoft.com/ubuntu/22.04/prod jammy/main all Packages [1035 B]
Get:8 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]       
Get:9 https://ppa.launchpadcontent.net/git-core/ppa/ubuntu jammy/main amd64 Packages [2969 B]
Get:10 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [109 kB]   
Get:11 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [1082 kB]
Get:5 https://packagecloud.io/github/git-lfs/ubuntu jammy InRelease [28.0 kB] 
Get:12 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages [1792 kB]   
Get:13 https://packagecloud.io/github/git-lfs/ubuntu jammy/main amd64 Packages [1842 B]
[Revomax/tutorial ] 🏁  Job succeeded
Get:14 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [1695 kB]
Get:15 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [2135 kB]
Get:16 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [44.7 kB]
Get:17 http://archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [17.5 MB]
Get:18 http://archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [266 kB]
Get:19 http://archive.ubuntu.com/ubuntu jammy/restricted amd64 Packages [164 kB]
Get:20 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1375 kB]
Get:21 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [2242 kB]
Get:22 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [51.1 kB]
Get:23 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [1975 kB]
Get:24 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [33.3 kB]
Get:25 http://archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [80.9 kB]
Fetched 31.2 MB in 10s (3009 kB/s)                                           
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
| libxext6 is already the newest version (2:1.3.4-1build1).
| libxext6 set to manually installed.
| libc6 is already the newest version (2.35-0ubuntu3.6).
| libstdc++6 is already the newest version (12.3.0-1ubuntu1~22.04).
| libx11-6 is already the newest version (2:1.7.5-1ubuntu0.3).
| libx11-6 set to manually installed.
| The following additional packages will be installed:
|  libexpat1-dev libusb-0.1-4
| The following NEW packages will be installed:
|  libusb-0.1-4 libusb-dev
| The following packages will be upgraded:
|  libexpat1 libexpat1-dev
| 2 upgraded, 2 newly installed, 0 to remove and 25 not upgraded.
| Need to get 288 kB of archives.
| After this operation, 299 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libexpat1-dev amd64 2.4.7-1ubuntu0.3 [147 kB]
Get:2 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libexpat1 amd64 2.4.7-1ubuntu0.3 [91.0 kB]
Get:3 http://archive.ubuntu.com/ubuntu jammy/main amd64 libusb-0.1-4 amd64 2:0.1.12-32build3 [17.7 kB]
Get:4 http://archive.ubuntu.com/ubuntu jammy/main amd64 libusb-dev amd64 2:0.1.12-32build3 [32.0 kB]
Fetched 288 kB in 1s (441 kB/s)   
| debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 26642 files and directories currently installed.)
| Preparing to unpack .../libexpat1-dev_2.4.7-1ubuntu0.3_amd64.deb ...
| Unpacking libexpat1-dev:amd64 (2.4.7-1ubuntu0.3) over (2.4.7-1ubuntu0.2) ...
| Preparing to unpack .../libexpat1_2.4.7-1ubuntu0.3_amd64.deb ...
| Unpacking libexpat1:amd64 (2.4.7-1ubuntu0.3) over (2.4.7-1ubuntu0.2) ...
| Selecting previously unselected package libusb-0.1-4:amd64.
| Preparing to unpack .../libusb-0.1-4_2%3a0.1.12-32build3_amd64.deb ...
| Unpacking libusb-0.1-4:amd64 (2:0.1.12-32build3) ...
| Selecting previously unselected package libusb-dev.
| Preparing to unpack .../libusb-dev_2%3a0.1.12-32build3_amd64.deb ...
| Unpacking libusb-dev (2:0.1.12-32build3) ...
| Setting up libexpat1:amd64 (2.4.7-1ubuntu0.3) ...
| Setting up libusb-0.1-4:amd64 (2:0.1.12-32build3) ...
| Setting up libexpat1-dev:amd64 (2.4.7-1ubuntu0.3) ...
| Setting up libusb-dev (2:0.1.12-32build3) ...
| Processing triggers for libc-bin (2.35-0ubuntu3.6) ...
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
| 0 upgraded, 0 newly installed, 0 to remove and 25 not upgraded.
[Revomax/tutorial2]  ✅  Success - Main Install dependencies
[Revomax/tutorial2] ⭐ Run Main Install XC8
[Revomax/tutorial2]  🐳  docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/2] user= workdir=
| --2024-04-18 10:53:19--  https://ww1.microchip.com/downloads/aemDocuments/documents/DEV/ProductDocuments/SoftwareTools/xc8-v2.36-full-install-linux-x64-installer.run
| Resolving ww1.microchip.com (ww1.microchip.com)... 23.209.233.96
| Connecting to ww1.microchip.com (ww1.microchip.com)|23.209.233.96|:443... connected.
| HTTP request sent, awaiting response... 200 OK
| Length: 71543287 (68M) [application/x-sh]
| Saving to: ‘xc8-v2.36-full-install-linux-x64-installer.run’
|
xc8-v2.36-full-inst 100%[===================>]  68.23M  3.17MB/s    in 21s   
|
| 2024-04-18 10:53:40 (3.33 MB/s) - ‘xc8-v2.36-full-install-linux-x64-installer.run’ saved [71543287/71543287]
|
[Revomax/tutorial2]  ✅  Success - Main Install XC8
[Revomax/tutorial2] ⭐ Run Main Install MPLABX
[Revomax/tutorial2]  🐳  docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/3] user= workdir=
| --2024-04-18 10:54:05--  https://www.microchip.com/bin/download?f=aHR0cHM6Ly93dzEubWljcm9jaGlwLmNvbS9kb3dubG9hZHMvYWVtRG9jdW1lbnRzL2RvY3VtZW50cy9ERVYvUHJvZHVjdERvY3VtZW50cy9Tb2Z0d2FyZVRvb2xzL01QTEFCWC12Ni4xMC1saW51eC1pbnN0YWxsZXIudGFy
| Resolving www.microchip.com (www.microchip.com)... 173.223.115.160, 2a02:26f0:c900:69c::b18, 2a02:26f0:c900:69e::b18
| Connecting to www.microchip.com (www.microchip.com)|173.223.115.160|:443... connected.
| HTTP request sent, awaiting response... 302 Moved Temporarily
| Location: https://ww1.microchip.com/downloads/aemDocuments/documents/DEV/ProductDocuments/SoftwareTools/MPLABX-v6.10-linux-installer.tar [following]
| --2024-04-18 10:54:06--  https://ww1.microchip.com/downloads/aemDocuments/documents/DEV/ProductDocuments/SoftwareTools/MPLABX-v6.10-linux-installer.tar
| Resolving ww1.microchip.com (ww1.microchip.com)... 23.209.233.96
| Connecting to ww1.microchip.com (ww1.microchip.com)|23.209.233.96|:443... connected.
| HTTP request sent, awaiting response... 200 OK
| Length: 999229440 (953M) [application/x-tar]
| Saving to: ‘MPLABX-v6.10-linux-installer.tar’
|
MPLABX-v6.10-linux- 100%[===================>] 952.94M  2.94MB/s    in 4m 54s 
|
| 2024-04-18 10:59:00 (3.24 MB/s) - ‘MPLABX-v6.10-linux-installer.tar’ saved [999229440/999229440]
|
| 64-bit Linux detected.
| Check for 64-bit libraries
| These 64-bit libraries were not found and are needed for MPLAB X to run:
| libusb-1.0.so
|
| For more information visit http://microchip.wikidot.com/install:mplabx-lin64
|
[Revomax/tutorial2]  ❌  Failure - Main Install MPLABX
[Revomax/tutorial2] exitcode '1': failure
[Revomax/tutorial2] 🏁  Job failed
Error: The runs.using key in action.yml must be one of: [composite docker node12 node16], got node20
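As for the library check itself: on Ubuntu, libusb-dev is the legacy libusb 0.1 package and does not ship libusb-1.0.so; that file comes from the libusb 1.0 packages (as the apt output above also shows, only libusb-0.1-4 was pulled in). A minimal sketch of the dependency step, assuming the stock 22.04 runner package names:

Code:

# libusb-1.0-0 provides libusb-1.0.so.0 and libusb-1.0-0-dev adds the libusb-1.0.so
# symlink the MPLAB X installer checks for; keep the rest of the original list.
sudo apt-get update
sudo apt-get install -y libc6 libx11-6 libxext6 libstdc++6 libexpat1 \
                        libusb-1.0-0 libusb-1.0-0-dev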

[ubuntu] How to install and use an older GCC version for armhf?

I'm running an Ubuntu 22.04.3 LTS image in WSL, as I need to cross-compile a program for an ARM Linux PC.

I've been able to install the ARM architecture and the ARM GCC, and to compile. The problem is that GCC is version 11 and the glibc the binary ends up needing is too new for the ARM Linux target.
So I need to downgrade to GCC 10.

I tried to follow this guide: https://tech.sadaalomma.com/ubuntu/h...ion-in-ubuntu/ so I removed both gcc and arm-linux-gnueabihf-gcc, then ran sudo apt install gcc-10-arm-linux-gnueabihf, and it installed.
Is it the right one?
I'm unable to call it... what's the command, or the path, to check its version, like gcc -v?
Should I install something like binutils-arm-linux-gnueabihf but for GCC 10? How?

Could someone guide me?
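The versioned cross packages install versioned binaries, so the compiler is invoked as arm-linux-gnueabihf-gcc-10; binutils-arm-linux-gnueabihf is pulled in as a dependency and has no separate GCC 10 variant. A minimal check (hello.c is only an illustration):

Code:

arm-linux-gnueabihf-gcc-10 --version
echo 'int main(void) { return 0; }' > hello.c
arm-linux-gnueabihf-gcc-10 -o hello hello.c
file hello            # should report a 32-bit ARM, EABI5 executable

Note that the glibc version a binary requires at run time comes from the cross libc it links against (libc6-dev-armhf-cross or the armhf libc6-dev), not from the GCC major version, so downgrading GCC alone may not lower the glibc requirement on the target.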

[ubuntu] Ubuntu install on Dell M6700 with M5000M Nvidia using 18.04, 20.04.6, 22.04 - UGH

Hi,
Short story - trying to leave Windows behind, which I now only need for SolidWorks.
Luckily I use the Clonezilla Live CD regularly for bare-metal backups, so I had that close to hand. The aim was to put Ubuntu on the 2nd drive.
I dedicate this post to the demise of three DVDs and 18 hours of time.

My system, Fully loaded Dell M6700, Optimus enabled, triple hard drives, was UEFI boot but now using BIOS multiboot - Win 10 on 1st drive, Win 11 on 2nd drive.

Attempt #1 - Try out 22.04 as a Live install - total failure, fails to launch live and just hangs.

Attempt #2 - Started with DVD install of 22.04 - total failure - 99.9% of the time the screen is black, occasionally a mouse cursor appears for 30 seconds or so but the hard drive and DVD drive just go bananas with a dead system.

Attempt #3 - Started with a DVD I have of 18.04 - excellent, it installs perfectly with no issue, once I got past a bug (use of an incorrect keyboard layout for the initial username\password entry) that prevented me from logging on using a # character. Decided to do an online upgrade to 20.04 - perfect, no issue. Decided to do a further online upgrade to 22.04 - failed - same result as the 22.04 DVD install, with a black screen doing zilch.

Attempt #4 - Booted into the 22.04 recovery mode with safe graphics from GRUB. Then, using the terminal, I tried the following commands I found on the internet:
Code:

sudo apt purge ~nvidia
sudo apt autoremove
sudo apt clean
sudo apt update
sudo apt full-upgrade
sudo reboot

Success! I ended up with a dual-boot Win 10 and Ubuntu 22.04 setup, so I Clonezilla'd everything before further experimenting.

Experiment #1 - Created a DVD of 20.04. The DVD install appeared to be working from the start, showing the Ubuntu screen (unlike 22.04), but right at the end of the install process 20.04 totally destroyed my boot partition, taking out absolutely everything including my Win10 installation - not good :o( thank god for Clonezilla!
Restored my 1st drive containing the Win10 and boot partition - system still dead!
Restored the 2nd drive with the backup of the 22.04 I installed earlier - woohoo, I'm back up and running.

Experiment #2 - Installed the other (proprietary) Nvidia driver for the M5000M - seems to be working fine.

In summary, 18.04 is the only DVD I can install without anything breaking. 20.04 seems to destroy booting in some way I haven't yet determined, and 22.04 just doesn't work at all with the graphics subsystem during install.

So for now I'm good, with Clonezilla backups of everything just in case :o)
I hope this is of help for other Dell users.

How to boot recovery mode from grub rescue?

I've installed Ubuntu on an old tablet, but it doesn't currently boot automatically and instead I end up at Grub Rescue.
I can run the following:
Code:

set root=(hd0,gpt2)
linux /boot/vmlinuz root=/dev/disk/by-id/mmc-BGND3R_0x2f804915-part2
initrd /boot/initrd.img
boot

And it will boot, but I just get a plain blue screen with a couple of icons at the top right. I can't open a terminal via keyboard shortcut. I did manage to connect to the Wi-Fi, but it seems SSH is disabled, so I can't get in that way.
From reading other threads, it seems the advice is to boot into recovery mode, but I have no idea how to do this.
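Ubuntu's recovery menu entries boot the same kernel with roughly "ro recovery nomodeset" appended, so the same thing can be done by hand from the grub prompt. A sketch reusing the commands from the post (the device and paths are the ones shown above):

Code:

set root=(hd0,gpt2)
linux /boot/vmlinuz root=/dev/disk/by-id/mmc-BGND3R_0x2f804915-part2 ro recovery nomodeset
initrd /boot/initrd.img
boot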

[ubuntu] Can't Install 22.04LTS on my PC's 2nd NVMe 2T chip

Hi All:

I installed a new blank (unformatted) 2TB NVMe chip in my notebook PC's second (of 2) available slots ("nvmes1"). My intention was to download the 22.04LTS iso and install it on this chip, where I can afterwards selectively boot to it (I have a long ago installed win10 instance on the first NVMe chip "nvmes0").

I downloaded and installed (on Win10) the recommended "balenaEtcher" ISO writer package. After running it, it looked like I had successfully got this utility to write the installation image to the nvmes1 device - I had a p1 (unspecified format type) with 5 GB in it, a p2 FAT partition with 5 MB on it, a p3 seemingly of "unformatted" type of 300 KB size, and the remainder of the device, p4, unformatted with 1.8 TB available.

When I booted to the chip, the Ubuntu graphical installation screen came up successfully. It gave me two options: either run a live demo of Ubuntu (to try it out) or install Ubuntu. I chose to install Ubuntu. So I stepped through the various options... selecting my US keyboard layout, choosing a regular (not minimal) installation, but with the additional options (auto-update plus something else) NOT selected. I chose to direct the installation to that large, empty, unformatted p4 area on the chip, and I chose the ext4 partition type.

The installation process started... and soon a status screen appeared with a status message "Detecting file systems". Below this was a small terminal window (minimally expandable) that displayed a log of installation status messages scrolling by.

During my first try at this... my patience was 2 hours. After two hours of seeing that same status message and a series of similar terminal messages that may have been in a loop, I force-rebooted the machine (back to my Win10 instance). When I inspected the chip, nothing had been written to the p4 partition.

At midnight before going to sleep, I repeated my earlier installation routine... kicking off the process the same way, again directing the installation to this large empty p4 partition. The next morning, about 9 hours later, I turned my screen on and discovered that the status message was exactly the same "Detecting file systems", and the installation log terminal window again appeared to be looping a series of the same messages (starting and stopping anaconda, starting and stopping zsystemd.service, couldn't delete an esoterically named file because it couldn't be found)... so I force-exited/rebooted the PC.

Once back into my stable Win10 environment, I inspected the device partition info and discovered that the p4 was successfully formatted to EXT4, and contained 31GB of data.

So it took 9 hours to (download?) write 31GB of installation data to my speedy NVMe device.

This is my work day machine. I don't have the time to give it days to do this installation. Apparently something is going wrong with the installation. Since the installation did not complete successfully, I'm figuring that if I try it again, it'll just start me at the beginning of the process all over again.

So I'm not able to install Ubuntu LTS 22.04 onto my NVMe chip. Any ideas?

ACPI Bios Error after dual boot setup

I created a USB installer with Rufus to dual-boot with Windows 11. I then installed it on my machine, and it seemed to go fine. When I restart and choose Ubuntu, I get this error: https://postimg.cc/Hc2jmhhf
I did disable Fast Startup. How do I go about troubleshooting this? Below is my system info. Thank you!




[ubuntu] Why doesn't Ubuntu Update SNAPs during reboot

I have found it necessary, every time I am prompted to reboot, to also issue the command "sudo snap refresh". If I do not do that, the various snaps will not be up to date. Since, like many users, I spend much of my day accessing services through a browser, updates to the Firefox and Chrome browsers get held up.

Why does the Ubuntu reboot process not automatically invoke snap refresh while there are guaranteed to be no conflicts with running processes, and the reboot has root authority?
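Snaps are refreshed by snapd's own timer (by default four times a day), not by the reboot or apt machinery, which is why a reboot alone doesn't update them; the schedule can be tuned through the system refresh.timer option. A few commands for inspecting this (nothing here changes the schedule):

Code:

snap refresh --time        # show when the last and next automatic refreshes happen
sudo snap refresh          # refresh all snaps immediately
snap changes               # list recent snapd operations, including auto-refreshes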

[ubuntu] Dual Booting Problem between Ubuntu and openSUSE

I am trying to set up a dual-boot situation with Ubuntu and openSUSE (for redundancy). I have done this previously, but this is the first time I have tried it with UEFI enabled. After many tries I did a clean installation with the EFI partition reformatted. I installed openSUSE first and Ubuntu second. The BIOS shows GRUB entries for both installations, and both GRUB entries will boot the system that was used to create them. The Ubuntu GRUB, being installed second, had entries for both Ubuntu and openSUSE, but only the Ubuntu system will boot; selecting openSUSE gives the error message "Bad Shim Signature."

The openSUSE forums suggested enrolling the openSUSE signing certificate as a MOK, using the command mokutil --import /mountpoint/etc/uefi/certs/*-shim.crt, where mountpoint was the openSUSE root partition; i.e. /etc/opensuse on my system. They said I would see a "blue screen" and could then enroll the key on the next boot. I did this. This time I got the messages "Bad Shim Signature" / "You need to load the kernel first," and the next boot did not bring up any screen allowing me to "Enroll the key."

At that point, they referred me to the Ubuntu forums for your wisdom. They talked about mm64.efi and MokManager but gave no other direction. Can you tell me how to proceed or point me in the right direction?
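One thing worth checking is whether the import was actually queued before rebooting; the MOK enrolment screen (MokManager, mmx64.efi) only appears on the next boot when a request is pending. A minimal sketch, reusing the certificate path from the post (that path is the poster's, not verified here):

Code:

mokutil --sb-state                                              # confirm Secure Boot is enabled
sudo mokutil --import /etc/opensuse/etc/uefi/certs/*-shim.crt   # prompts for a one-time password
mokutil --list-new                                              # should list the queued certificate before you reboot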

Running a script as the new user during cloud-init

I am new to using cloud-init to automate the installation of Ubuntu 22.04. I am installing the OS using the cloud-config below:
Code:

#cloud-config
autoinstall:
  version: 1
  apt:
    primary:
    - arches: [default]
      uri: http://freenas.boot.pxe:9081/repository/apt-proxy/
  user-data:
    timezone: Asia/Shanghai
    disable_root: true
  identity:
    hostname: ubuntu
    password: $6$FhcddHFVZ7ABA4Gi$r9wyPRFCtp1fL28aj6C9G4GIePSfqqUDVJAq7.qAtookMeYJCY6NH7TF19NZHk.x0ign7Y7Xgh3JEGcXVfozH1
    username: oai
  keyboard: {layout: us, variant: ''}
  locale: en_US.UTF-8
  ssh:
    install-server: true
  packages:
    - htop
    - iftop
    - curl
    - git
    - gnupg-agent
  storage:
    layout:
      name: direct
  late-commands:
    - curtin in-target --target=/target -- wget -O /usr/local/bin/post_script.sh http://freenas.boot.pxe:8080/PXE/Preseed/post_script.sh
    - curtin in-target --target=/target -- chmod +x /usr/local/bin/post_script.sh
    - curtin in-target --target=/target -- /bin/bash /usr/local/bin/post_script.sh

During the installation, a new user named "oai" is created, and the post-configuration script "post_script.sh" is downloaded to the target and executed. However, when post_script.sh executes, it cannot see the newly created user "oai". The simple script below throws an error saying that user "oai" does not exist.
Code:

#!/bin/bash

mkdir -p /home/oai
chown oai:oai /home/oai

It seems that the user "oai" has not yet been created when late-commands run. Is late-commands the right place to put the post-configuration script? Is there any other way to set the owner of a file/path to the newly created user, rather than root, during the cloud-init process?

Thanks.
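The user from the identity section is created by cloud-init on the first boot of the installed system, after the installer's late-commands have already run, which is why "oai" cannot be resolved at that point; running the script from a first-boot mechanism (for example a runcmd entry in the user-data section) avoids the problem. If it has to stay in late-commands, one workaround is to chown by the numeric IDs the first user will receive. A minimal sketch of post_script.sh, assuming "oai" ends up as the first user with UID/GID 1000 (an assumption, not stated in the post):

Code:

#!/bin/bash
# 'oai' is created by cloud-init at first boot, after late-commands have run,
# so chown by the numeric IDs the first user is expected to get.
mkdir -p /home/oai
chown 1000:1000 /home/oai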

[ubuntu] Ubuntu Server 22.04.4: "apt upgrade" breaks boot

First off, I'm a newbie. I just got back from a week out of town and ran "apt update" then "apt upgrade." I was (and am) on 22.04.4 LTS @ 5.15.0-102-generic. After running "apt upgrade" the system fails to boot. I've looked at GRUB a bit, but I really don't have any experience with it. One error I do see is "Invalid magic number, you need to load the kernel first." In the GRUB menu options after the boot fails, if I choose the earlier kernel version (102) rather than 105, it will boot. Where should I go from here? What should I look for? I would expect others are seeing this. This is a clean install on a dedicated machine, no dual boot. If I restore my HDD image of 22.04.4 with kernel 102, it boots fine. Thanks in advance for your feedback.
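"Invalid magic number" from GRUB usually means the kernel image it tried to load is truncated or corrupt, for example because /boot filled up during the upgrade. Since the -102 kernel still boots, one approach is to boot it and reinstall the newer kernel. A minimal sketch, assuming the failing kernel is 5.15.0-105-generic (inferred from the post, not confirmed):

Code:

# From the working 5.15.0-102 boot:
df -h /boot                                            # make sure there is free space for kernel + initrd
sudo apt install --reinstall linux-image-5.15.0-105-generic
sudo update-initramfs -u -k 5.15.0-105-generic
sudo update-grub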




