User:Tonyr/Journal/Feb07


3feb07

The flaky G3 B&W tower problem turned out to be a bad system board, solved by replacement. The bad board went into the cannibalized system, which was tagged and put back on the Mac Pile. The SCSI G3 B&W tower was bundled with a 17" Apple Studio Display, speakers, and an external floppy and sent to the store priced at $100. As it happened, a customer in the store bought it right away. I talked a little about pricing earlier (6jan07). It seems to me that a general price guide is shaping up. I would suggest that there be a standard base configuration for a slot-loading iMac and a standard base configuration for a tower, each with a standard price, plus a standard schedule of price adjustments for differences in HD type/size, memory size, CD/DVD type, processor speed, and bundle configuration.

We seem to be accumulating 17" (CRT) Apple Studio Displays, which go on ebay for about $20. Since Ubuntu installation detects the monitor type and configures xorg accordingly, a question arises about what the guidelines should be for monitor configuration when installing Ubuntu on towers. The information about the xorg/monitor assumptions should be published for customers.

Ubuntu installation in MacRebuild should be via network, as it is in PC build. That means that the MacBuild area should have network cabling for however many build seats there will be in the new MacRebuild area. The build servers will need to have Ubuntu-ppc support, and the process for ppc netboot will need to be documented. I'm working on that.

6feb07

Network install
I haven't been able to establish that network install on Ubuntu 7.04 works for powerpc. That is not to say that it doesn't; I just haven't been able to discover 1) whether it is possible at all, and 2) if it is possible, just how in the heck it is supposed to work. The 7.04 netboot files fail to do ramdisk (initrd) loading, and I haven't looked at dapper yet (the netboot files appear to be the same set in dapper and 7.04 feisty). Feisty may not be a good example since it is still in alpha. Using the boot files from the feisty alternate CD as a starting place, configuring a tftp server, and booting an iMac with enet: in OpenFirmware actually gets the install started, but it gets confused when there is no CD in the drive. The yaboot.conf file has a comment that says something like "This is for CD install only, and should not be used as a general example", so I'm not really surprised that it doesn't work. There is another approach: use the package tftpd-hpa instead of tftpd, have the host act as a server for dhcp, tftp, and http, and access the alternate CD files via http. Maybe I'll give that a shot.
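Roughly what that last approach would look like on the server side, as an untested sketch. The package names (dhcp3-server, tftpd-hpa, apache2) are my best guess at the right ones for the build server's release, and the addresses, paths, and ISO name are made up for illustration.

# install the three servers (package names are assumptions; verify them)
apt-get install dhcp3-server tftpd-hpa apache2

# drop the powerpc netboot files where tftpd-hpa serves from
cp -r netboot/* /var/lib/tftpboot/

# make the alternate CD contents reachable over http
mkdir -p /var/www/feisty
mount -o loop ubuntu-7.04-alternate-powerpc.iso /var/www/feisty

# /etc/dhcp3/dhcpd.conf needs a stanza along these lines (example subnet):
#   subnet 192.168.0.0 netmask 255.255.255.0 {
#     range 192.168.0.200 192.168.0.220;
#     next-server 192.168.0.1;
#     filename "yaboot";
#   }

# then, on the iMac, from OpenFirmware:
#   boot enet: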

hda/hdc device assignment swap on B&W G3
There is a known (in the Ubuntu community) issue about storage devices being discovered in a different order on B&W G3 powermacs. The main symptom is that the CD-ROM device is assigned as hda, and the hard drive is assigned as hdc. This is documented at the Ubuntu bug site, Launchpad, as bug number 7256. The problem is related to the order in which modules are loaded at install time. The solution is to modify the installed initrd image. I'm not sure that is something that we want to do, since it requires some advanced linux admin/hacking knowledge, is non-standard with respect to build process, and would re-appear if a customer were to try to re-install Ubuntu. It also seems to affect the ability of OpenFirmware to boot the CD from the keyboard with the C key, although I'm not convinced at this point that this is a related phenomenon. (There are some comments in related bugs at Launchpad that indicate that sometimes the USB mouse and keyboard do not appear to be working.) I'm more inclined to reject these from MacBuild.

8feb07

hda/hdc device assignment swap on B&W G3
There is a second B&W G3 tower now on the bench that also assigns the CD as hda and the hard drive as hdc, but this one boots the CD just fine. So what is the CD boot problem with the other one (6feb07)? Meanwhile the suspect tower is being used as an HD wipe and memory test station.

Duplicating Ubuntu/PPC hard drives
There was a discussion today (and yesterday) in the Mac corner about duplicating Ubuntu hard drives for the Macs instead of doing a CD based installation every time. Martin and Jeff talked about setting up a network install via DHCP/TFTP. Dave and Matteo talked about duplicating a standard installation hard drive. Today Matteo and Loren were working on a manual process for doing that. Here is my take on that process.

The source hd cannot be the booted drive, because many files will be open and active. A rescue CD booted from the CD drive will do; that way both the source and target hard drives can be passively mounted. The source and target drives are almost guaranteed to be different (mfg and/or size), so a straight cloning is out of the question. That means either creating the partition table manually, or using the alternate install CD (in expert mode?) to partition the hard drive.

The powerpc install process creates four partitions:

  1. a partition table partition
  2. a boot partition
  3. a Linux partition (for the OS)
  4. a swap partition

The size of the partition table and the size of the boot partition are the same no matter what the size of the hard drive. The swap partition size is based on the size of installed memory. For a standard memory configuration the size of the swap partition will always be the same. The size of the linux partition can be calculated as

  • DiskSize - (PartitionTableSize + BootPartitionSize + SwapPartitionSize + 1)

The extra 1 accounts for the first sector on the drive (block 0) not being used. It is probably easiest to let the alternate install CD do the partitioning. If the duplication process is going to be automated, the Linux and Swap partition sizes will need to be calculated.
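In shell terms the arithmetic is just the following; the disk size here is invented, and the boot and swap block counts are the ones that show up in the 9feb07 steps below.

#!/bin/sh
# all sizes are in 512-byte blocks
DISK_SIZE=39102336     # made-up example, roughly a 20GB drive
PMAP_SIZE=63           # the Apple partition map itself (partition 1)
BOOT_SIZE=1954         # the Apple_Bootstrap partition
SWAP_SIZE=1494848      # swap size for the standard memory configuration

# the +1 is block 0, which is not inside any partition
LINUX_SIZE=$((DISK_SIZE - (PMAP_SIZE + BOOT_SIZE + SWAP_SIZE + 1)))
echo "linux partition length: $LINUX_SIZE blocks"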

Matteo rightly pointed out that if the swap partition were created before the linux native partition, then the calculation step would not be needed.

Manual creation of the partitions can be accomplished with fdisk, which should be available on any rescue CD. The ppc version of fdisk has specific commands for creating the partition table and boot partition. The Linux and Swap partitions are both created as standard linux partitions.

Once the partitions are created on the target drive, the duplication process is fairly straightforward:

  1. create an ext3 filesystem on the Linux partition using mke2fs -j (or mkfs.ext3)
  2. create a mount directory for the linux partition of the target drive in the /mnt directory; a mount directory for the source drive should already exist.
  3. mount the source and target linux partitions on the appropriate mount points in /mnt
  4. use 'dd' to copy the boot partition directly
  5. use rsync to copy the installed linux partition to the target drive linux partition

There is one possible problem with this duplication process that must be corrected by hand. On most Macs the hard drive is assigned as device hda and the CD drive is assigned as hdc; on some, particularly B&W G3 towers, the assignment is reversed. This assignment is used in /etc/fstab to define mount points. The fstab file on the duplicated hard drive must match the actual device assignments on the machine into which it is placed. Either different source drives must be created, one for each of the possible device assignment combinations, or a duplicated hard drive may need to have its /etc/fstab file modified to match the target machine.
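For the second option, the fix-up could be a three-rule sed swap on the copy's fstab. This is a sketch only: it assumes fstab still names devices like /dev/hda3 rather than UUIDs (see the 10feb07 note), and it assumes the copy is mounted at /mnt/hdd3 as in the steps below.

# swap every hda reference for hdc and vice versa, going through a
# temporary token so the later rules don't undo the earlier ones
sed -i -e 's|/dev/hda|/dev/TMP|g' \
       -e 's|/dev/hdc|/dev/hda|g' \
       -e 's|/dev/TMP|/dev/hdc|g' /mnt/hdd3/etc/fstab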

9feb07

Here are the steps we used today to replicate a powerpc linux-installed hard drive to a wiped hard drive in a G3 B&W tower. Note that in this exercise, the hard drives are recognized as hdc (master) and hdd (slave). A script form of steps 4 through 8 follows the list.

  1. jumper the source drive as master; jumper the target drive as slave.
  2. boot from finnix CD
  3. create the partitions on the target disk using fdisk /dev/hdd
    1. i command to create/initialize the partition map
    2. C command to create a boot partition: start=64, length=1954, type=Apple_Bootstrap, name=untitled
      • (yes, capital C)
    3. p command to see freespace (unused disk blocks)
    4. c command to create a linux partition: start=2018, length=(freespace - 1494848), name=untitled
    5. c command to create a linux swap partition: start=(2018 + linuxPartitionLength), length=1494848, name=swap
    6. w command to write the partition table to the hard drive
    7. q command to quit fdisk
  4. create an ext3 file system on the linux partition
    • mkfs.ext3 /dev/hdd3
  5. create a mount directory hdd3 in /mnt (directory hdc3 should already exist there)
    • mkdir /mnt/hdd3
  6. mount the source and target linux partitions
    • mount -t ext3 /dev/hdc3 /mnt/hdc3
    • mount -t ext3 /dev/hdd3 /mnt/hdd3
  7. copy the boot partition using dd (see 10feb07)
    • dd if=/dev/hdc2 of=/dev/hdd2
  8. copy the linux partition using rsync; note that the trailing '/' is required for proper results.
    • rsync -av /mnt/hdc3/ /mnt/hdd3/
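Steps 4 through 8 collapse into a short throwaway script; this is a sketch that assumes the partitions were already created in step 3 and that the drives really are hdc and hdd as above.

#!/bin/sh
set -e
SRC=hdc   # source drive (jumpered master)
TGT=hdd   # target drive (jumpered slave)

mkfs.ext3 /dev/${TGT}3                    # step 4: ext3 on the linux partition
mkdir -p /mnt/${SRC}3 /mnt/${TGT}3        # step 5: mount points
mount -t ext3 /dev/${SRC}3 /mnt/${SRC}3   # step 6: mount source and target
mount -t ext3 /dev/${TGT}3 /mnt/${TGT}3
dd if=/dev/${SRC}2 of=/dev/${TGT}2        # step 7: copy the boot partition
rsync -av /mnt/${SRC}3/ /mnt/${TGT}3/     # step 8: copy the linux partition
                                          #         (trailing slashes matter)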


There are some issues concerning the process as described here.

  • The boot partition is copied with dd instead of rsync. It does appear possible to use rsync if the boot partition is created as an hfs filesystem. It is NOT clear at this time that the fdisk command to create the partition provides the file system as well, and the hfs utilities are not included on the finnix CD. We could create a modified finnix CD with these utilities added, if necessary. Using rsync would then require the explicit creation of an hfs filesystem on hdd2, the creation of a /mnt/hdd2 directory on which to mount the partition, and an rsync step to copy /mnt/hdc2 to /mnt/hdd2. (see 10feb07)
  • The replicated drive boots, but in a strange way. It seems to not find the boot file at first, showing the folder-with-questionmark icon for a few seconds before booting the yaboot image. I'm not sure why that is. OpenFirmware has an environment variable boot-device whose value is a space separated list of boot file candidates. The default value uses an hfs file type specifier, \\:tbxi. I think the file type is in the file's resource fork, which makes me think that the resource fork exists in the partition created by the Ubuntu installation process, but not in the copy created with rsync. I could be wrong. I tried several things when the folder-with-questionmark first appeared:
  • I rebooted into OpenFirmware and selected the boot file explicitly with boot hd:2,\\yaboot, meaning boot using the file named yaboot on the second partition of the default hard drive. That worked right away. I then added that boot file specifier to the boot-device environment variable, and that seemed to work, too. That made me start to think that maybe the first time I saw the boot failure, it really was a boot failure: the added boot specifier was in the second position in the list, and the boot-failure icon behavior is consistent with trying the first boot specifier and failing before the second specifier is used. This needs to be tried with the new boot file specifier in the first position.
  • If the resource fork files are actually in the boot partition, then the dd method of copying the partition should get them. I tried that, too, and the boot failure icon still appeared, but the boot eventually succeeded, as before. This also could be tested with the file order modification in the environment variable. (see 10feb07)
  • The calculation of the linux partition size could be avoided if the order of the partitions is changed by swapping the linux partition and the swap partition. The numbers needed for partition start can be read right off the partition creation information. This invalidates the information in the yaboot.conf file in the boot partition, which has partition information created by the Ubuntu installation process. Swapping the partition order means that the linux boot images will be on partition 4 (hdd4), but the yaboot.conf file says that they are in partition 3 (hdd3), so the boot would fail. It is possible to edit the file and make the appropriate changes. So the question is which is the better solution, calculating the partition start and length numbers, or editing the yaboot.conf file. In addition, the file /etc/fstab is created believing that the root device is partition 3. It would need to be edited also to reflect the new partitions.
  • Some Mac computers, as mentioned earlier, designate the hard drive as hda and the CD-ROM as hdc. The towers that we are using to develop this process designate the CD as hda and the first hard drive as hdc.

This means that either there will need to be separate master hard drives for each device configuration, or the fstab file will need to be edited to reflect the device changes.


Afterthoughts
If the order of the devices in linux is determined by driver loading order at boot time, then the order (hda vs hdc) assigned by finnix may be different from the order assigned by Ubuntu. This needs to be checked and verified as so or not-so.

Michael (tech support) told us a couple of days ago that he wasn't prepared to support Macs, let alone Macs with Edgy installed, and asked if he could route technical support questions to us. That's fine. Maybe we could give him a Mac with Edgy installed for the tech support area. I sent a note to the reuse mailing list to that effect.

10feb07

HD replication
Replicating the boot partition apparently requires the dd method. I tried it both ways (rsync and dd), and only the dd method made the boot work correctly.

Creating a boot partition with type Apple_Bootstrap apparently DOES create the hfs filesystem on the partition. I verified it by creating a boot partition on a wiped drive and mounting the boot partition with type hfs. It makes no difference to the process, however, since the resource fork issue requires the use of dd to copy the partition anyway.

I can write a script that generates the start and length values of the linux and swap partitions and puts them in a script that can be fed to fdisk. Here, generally, is what such an fdisk script would look like; the comments are annotations for this writeup and would not be part of the script:

i                # initialize the partition map
                 # blank response to init disk size query
p                # print empty partition table
C                # (capital C) create a (boot) partition using types
64               # start block
1954             # length
untitled         # name
Apple_Bootstrap  # boot partition type
p                # print modified partition table
c                # create normal (linux native) partition
<number>         # start block
<number>         # partition length
untitled         # name
p                # print modified partition table
c                # create swap partition
<number>         # start block
<number>         # length
swap             # name
p                # print final partition table
w                # write partition map to disk
y                # confirm write to disk
q                # quit

The <number> things will be calculated based on the disk size and put in the shell script that generates the fdisk script.
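Something like the following is what I have in mind. It is untested: it assumes the fdisk on the rescue CD will read its commands from stdin, and that blockdev is available to report the drive size in 512-byte blocks.

#!/bin/sh
# usage: sh make-partitions.sh /dev/hdd   (sketch only)
DEV=${1:-/dev/hdd}

BOOT_START=64
BOOT_LEN=1954
SWAP_LEN=1494848

# total 512-byte blocks; older blockdev spells this --getsize
DISK_LEN=$(blockdev --getsz "$DEV")

LINUX_START=$((BOOT_START + BOOT_LEN))
LINUX_LEN=$((DISK_LEN - LINUX_START - SWAP_LEN))
SWAP_START=$((LINUX_START + LINUX_LEN))

fdisk "$DEV" <<EOF
i

p
C
$BOOT_START
$BOOT_LEN
untitled
Apple_Bootstrap
p
c
$LINUX_START
$LINUX_LEN
untitled
p
c
$SWAP_START
$SWAP_LEN
swap
p
w
y
q
EOF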

PROBLEM: Ubuntu is mounting by UUID in /etc/fstab these days. The UUID of the replicated drive will not match the UUID in /etc/fstab after replication. So fstab will have to be modified by the replication process to use the new UUID for the root and swap partitions.

The UUIDs are found in directory /dev/disk/by-uuid as links to the referenced partition.
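So the replication process will need a step along these lines. This is a sketch; it assumes the copied root partition is hdd3, mounted at /mnt/hdd3, and that the root line in the copied fstab is in the UUID= form.

# UUID of the freshly created target filesystem
NEW_UUID=$(tune2fs -l /dev/hdd3 | awk '/UUID/ {print $NF}')

# UUID that the copied fstab currently uses for /
OLD_UUID=$(awk '$2 == "/" {sub("UUID=", "", $1); print $1}' /mnt/hdd3/etc/fstab)

sed -i "s/$OLD_UUID/$NEW_UUID/" /mnt/hdd3/etc/fstab

# the swap line needs the same treatment; /dev/disk/by-uuid (or the output
# of mkswap) gives the new swap UUID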


Other stuff
We had a first inquiry today about helping with Mac rebuild, from Gary and his son Michael. I asked him to talk to Dave or Matteo.

12feb07

smartmon
Dave suggested looking into a utility called smartmon to reduce the time required for wiping hard drives and forcing reallocation of bad sectors. The Ubuntu package is actually smartmontools, its homepage is smartmontools.sourceforge.net, and a writeup about using it to deal with bad blocks can be found here. The package is not normally installed with Ubuntu (I didn't see it in my Edgy installation). I haven't looked at the LiveCDs yet (x86 or powerpc), but it is on the finnix powerpc rescue CD that I have been using. A dedicated station with an Ubuntu installation would need to have the smartmontools package installed.


13feb07

Today I asked the question "What is the wipe and test process for hard drives?", trying to find out what I should be looking at for doing wipe/test on the Mac side. It should be noted that there is a side goal of trying to reduce hard drive wipe and test time in general. My starting point was my infant process of using smartctl -t <test> to identify bad blocks, and then using dd to write zeros to the entire hard drive.
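For the record, that infant process boils down to something like this; the device name and the choice of the extended test are just examples.

# run the SMART extended self-test (it runs in the drive firmware, in the background)
smartctl -t long /dev/hda

# check the self-test log when it finishes
smartctl -l selftest /dev/hda

# then write zeros over the whole drive
dd if=/dev/zero of=/dev/hda bs=1M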

One person answered that badblocks was the current solution. One person thought writing zeros and then ones to the whole drive would be sufficient to wipe the drive. I explained to Martin what I was doing, and he asked if smartctl was doing a destructive test (write-then-read).

From what I understand about smartctl, there is no mode in which it does a destructive test. Its purpose seems to be to monitor read failures on a drive, with the goal of predicting disk failure. The idea is that once blocks start to go bad, other block failures will surely follow as the night follows the day (or vice versa if you prefer). There is a suggested method for reallocating blocks to replace the failing blocks, but only as a stop-gap while the failing disk is backed up and a replacement can be installed.

I found out some other things. The in-place wipe/test policy is that the discovery of ANY bad blocks on a hard drive requires that the drive be rejected. All that smartctl can do is report that there were bad blocks found. My reading of the smartmontools documentation leads me to believe that a smartctl extended self-test is required to exhaustively search for a read failure on the drive. Such a test could take an hour or more. The information I don't have is what the disk-size vs time graph looks like for badblocks. Writing zeros to a 20GB hard drive took about 20 minutes on a 350MHz G3 PowerMac. Two passes would take about forty minutes. Four passes would take about 1hr 20min. If I took a wild guess and said that a smartctl/raw-write solution takes 2hr 20min and a badblocks solution takes three hours, the savings is not enough to warrant the change in methods. I'd have to see what happens for larger drives.

I noticed that the numbers for the partition block lengths didn't add up to the total block size of the hard drive in my partitioning process. My assumption that the block numbering started at 1 (start of partition map) is wrong: numbering starts at zero. I'll go back and modify the description of the arithmetic. (Done 14feb07)

15feb07

Auto-restart problem
Loren marked one of the iMacs in the warehouse as automatically restarting after shutdown. I googled for references to this problem, and found a couple. This Apple Developer Article acknowledges the problem and indicates that USB devices may be involved. The writer of this MacWorld Forum thread found that the USB keyboard/mouse combo appeared to be the cause of his problem. This suggests a line of investigation:

  • try hard powerdown with just the power button (power button may be stuck/defective)
  • try unplugging the keyboard right after software shutdown
  • try mouse and keyboard on separate USB ports, and try other keyboard/mouse with this arrangement
  • replace the USB controller on the system board (I'm not certain that this is possible on an iMac), or...
  • replace the system board

Mac build scripts
I created a Mac Build Scripts page to hold the Mac disk replication scripts I am working on. I guess any other scripts related to Mac Rebuild will get put there, too.

16feb07

Rejected one G3/333MHz Powerbook with a bad keyboard. Started eval on a G4/800MHz Powerbook Titanium. Specs I have seen indicate that there should be an Airport card in it, but I haven't verified that yet. The owner could have removed it. There is a disassembly/fixit guide here.

Pricing research for the G4 Powerbook at ebay:

CPU      MEM     HD      Display(in)  Optical drive   $ bid+ship (total)      date, notes
400MHz 	256M	10GB	15	DVD/CDRW	530+29(559)	15feb07
500MHz 	256M	30GB	15	DVD/CDRW	500+33(533)	16feb07
550MHz 	640M	40GB	15	DVD/CDRW	545+37(581)	16feb07 refurb
867MHz 	256M	40GB	15	DVD/CDRW	550+30(580)	14feb07
867MHz 	256M	40GB	15	DVD/CDRW	439+29(469)	14feb07 refurb
550MHz 	256M	40GB	15	DVD/CDRW	478+96(574)	14feb07
550MHz 	256M	20GB	15	DVD/CDRW	625+29(654)	13feb07
800MHz 	256M	30GB	15	DVD/CDRW	480+35(515)	14feb07
400MHz 	384M	20GB	15	DVD/CDRW	450+28(478)	13feb07
1500MHz	512M	60GB	15	DVD/CDRW	775+30(805)	13feb07
1000MHz	512M	60GB	15	DVD/CDRW	630+29(659)	13feb07
500MHz 	512M	80GB	15	DVD/CDRW	450+25(475)	12feb07
867MHz 	256M	40GB	15	DVD/CDRW	499+29(528)	11feb07
1000MHz	512M	60GB	15	DVD/CDRW	658+95(743)	11feb07
800MHz 	512M	80GB	15	DVD/CDRW	421+15(436)	11feb07 problems
400MHz 	384M	10GB	15	DVD/CDRW	580+30(610)	11feb07
867MHz 	256M	40GB	15	DVD/CDRW	439+29(469)	11feb07 refurb, no wifi
500MHz 	1000M	20GB	15	DVD/CDRW	710+17(727)	14feb07
867MHz 	1000M	80GB	15	DVD/CDRW	649+35(684)	09feb07
867MHz 	256M	40GB	15	DVD/CDRW	490+40(530)	09feb07 no wifi

An Airport card (original, not Extreme) at ebay has a price range of about $50-$80.

Memory Testing
There is a memtest program for PPC machines, and it is included on the finnix CD. It does not test all of installed memory, however, and I haven't figured out exactly what's going on there. It runs in a booted system (I haven't seen a standalone bootable version yet), so it obviously doesn't do the 'copy myself around so that I can test the whole physical memory space' thing. It offers options to select memory ranges, so one strategy would be to leave a tested memory stick in a test system all the time, and add sticks to be tested in slots that represent physical memory addresses that are available for testing. I assume that this is possible to do since I see the memory-stick/address assignments in the /memory device reported by OpenFirmware. I also assume that a faster processor will test memory faster, so a higher end machine would be a good candidate for testing.

Needless to say, this is partially a moot point because standard PC-100 and PC-133 memory is used in most of the Macs we are working with, and it is tested in Advanced Testing already.

17feb07

G4 Powerbook
The airport card is there. The memory is 512MB (2x256 SODIMM PC-133). I tried to run memtest in place from finnix. The test got through step 11 and seemed to hang at step 12. On reboot the video was hosed. It is possible that memtest wrote to places it shouldn't have. It is also possible that overheating caused problems (the machine had been on for ~24 hours). I repeated Jeff's exercise, which was to reseat the display connector, and I reseated the memory, too. After that the video problem went away: finnix booted, and the alternate Ubuntu install CD booted. This sounds like a repetition of the sequence of events that brought the laptop to FreeGeek in the first place. On the other hand, the alternate install process completed successfully without interruption.

The memory still needs a complete memtest. Jeff suggested that laptops (black hole) might be able to provide an x86 unit for testing SODIMMs with a bootable memtest86 image. The battery needs a time eval.

The current build sheet calls for a PRAM battery check. Getting at the PRAM battery is not difficult, but neither is it trivial. It requires removing the DVD drive, a few small pieces, and the tape holding the battery in. There is a step-by-step guide here.


Network Install (see 6feb07)
The newest finnix release, 89.0, is reported to have a netboot feature. I haven't played with that yet. It's still an option I guess, but the HD replication approach is probably better. I think I mentioned this before: There should be a separate HD for each possible basic arrangement of device assignments. It is also possible, but a little riskier, to modify the relevant files. I know about fstab and yaboot.conf. Are there others?

19feb07

Thinking about a dedicated machine for Ubuntu ppc disk replication.

  • It must support at least two IDE drives.
  • It should support at least one SCSI drive.
  • A copy of the entire linux installation can reside on the boot drive, in a directory and/or in its own partition.
  • The boot drive should have a separate partition with an image of a generic boot partition.

Putting the linux installation image in its own partition makes it possible to offer it as a boot alternative. That would in turn allow the image to be updated regularly from the network, reducing the update time associated with the Mac Build process.

A generic boot partition would preserve the original boot partition contents. The real boot partition on the boot drive will have specialized content if there is more than one boot image. I'm not sure that is so important, since the only file in the boot partition that would change is yaboot.conf, and that will most likely need to be changed anyway.

It would still be necessary to maintain unique versions of yaboot.conf for machines with different hda/hdb/hdc assignments. Note that the ppc-linux command mkofboot will do the right thing to create a boot partition on a target drive. I will need to experiment with this some to verify that it does, indeed, do the right thing.

/etc/fstab modifications are:

  • substitute the correct UUIDs for the linux and swap partitions. A partition UUID can be discovered with tune2fs -l <partition-spec> | grep -i uuid
  • substitute the correct device for the cdrom, if there is one.

Here are a couple of trailing thoughts. Most of the Macs we've been working on have ATI video controllers. Some of the newer ones have nVidia controllers. The /etc/X11/xorg.conf file would need to be modified accordingly, or such machines would need to have a regular alternate-CD install applied. There are also SCSI based systems. We built one a couple of weeks ago that had two SCSI drives in it. That kind of configuration might need a custom install, also.
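The ATI-to-nVidia change itself should be small, assuming the stock open source X drivers; this is a guess at the minimal edit, not something I have tried yet.

# on a replicated drive mounted at /mnt/hdd3, switch the X driver from
# "ati" to "nv" in the Device section of the copied xorg.conf
sed -i 's/Driver[[:space:]]*"ati"/Driver      "nv"/' /mnt/hdd3/etc/X11/xorg.conf

# or just regenerate xorg.conf on the machine itself after first boot:
#   sudo dpkg-reconfigure xserver-xorg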

20feb07

G4 Powerbook
It has a loose-connection/power problem at least. The power connector changes from green to amber just by applying pressure to the case above the connection point, or sometimes just goes out if the wire is wiggled. The battery may be bad, too. The power connector turned green after about an hour of charging, indicating a full charge, but on power-up the battery reported a power level of 3%, and the box immediately went to sleep. After two more hours of charging the same thing happened.

The display occasionally has what looks to be a sync failure, with ghost images and wrapped scan lines. Maybe this guy was dropped at some point and there were connection/cable ramifications.

The Powerbook goes to the lockup at night and over weekends (or whenever nobody is in the Mac Build area for an extended period).

New G4 tower
450MHz, 20G HD + 80G HD, 1GB Ram (51+2*256+128+128), AGP video, PCI video, (yeah, two of 'em), SCSI controller. The AGP video card seems to be dead, no initial signal to the monitor. The PCI video card is alive and works. Through some sequence of events and reasoning that I cannot now completely recall, I connected a hard drive out of an iMac (where HD is device hda and CD is device hdc) to the ide0 cable. This is actually part of an exercise to figure out what needs to be done to a replicated ppc/linux drive when installing it in another machine. This configuration did not get past stage1 boot at power up. Booting into OF (Cmd+Option+O+F), however, allowed a direct boot command (boot hd:,\\yaboot) to succeed. Direct modification of yaboot.conf in the boot partition (hda2) did not affect the power-up boot failure, but the recommended procedure (modify /etc/yaboot.conf and run ybin) did fix the boot problem. The modification was to the device parameter assignment. In short, the value of that parameter has to match the machine that the hard drive is installed in.
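In other words, the fix after a direct OF boot amounts to the following; the device value shown is just a placeholder, since the right value is whatever OpenFirmware calls the drive in that particular machine.

# point the device parameter in yaboot.conf at this machine's drive
# ("hd:" here is only an example value)
sudo sed -i 's|^device=.*|device=hd:|' /etc/yaboot.conf

# then rewrite the bootstrap partition from the updated config
sudo ybin -v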

While the power-up boot worked and got to the login screen, the actual login failed to start the X system (xserver and gnome). I tried reconfiguring the xserver with the command dpkg-reconfigure xserver-xorg, but that didn't work either. Part of that problem was that the xserver-xorg package was reportedly not completely installed. It turned out that a lot of things were not completely installed, as the command dpkg --configure -a showed. This leads me to believe that the hard drive I used may not have had as complete an installation as I had assumed. Reconfiguring xserver-xorg again didn't fix this problem, and the Xorg log (/var/log/Xorg.0.log) reported that it couldn't find a screen. I assumed that it needed a BusID added to the video device description in /etc/X11/xorg.conf, but no bus-id value I tried solved that problem. The reconfiguration process wanted to use PCI:17:4:0, which was correct as far as I could tell, but that didn't work, either. The only thing that did work was to borrow a working AGP card from another machine and reconfigure again. The problem with that solution is that the AGP video card has two non-standard video connectors, a DVI and an ADC, and the monitors we have do not use either of those connectors. I had to use the only DVI-to-15-pin adapter that we have to get the monitor to work. I couldn't find another AGP video card with a 15-pin connector that worked.

There may be something that can be done in OF about the PCI bus-id for the pci video card, but I don't know what.

I'm not sure what else to do with it at this point. I guess I need to verify that the other AGP cards that didn't seem to work really don't work. If they DO actually work in another machine, then things only get more confusing.

22feb07

Macs with nVidia video
Ubuntu Edgy (6.10) installation does not like powerpc machines with nVidia controllers. The dome G4/700 is a case in point. There was another machine some time ago with an nVidia controller that didn't work, either. The current development version of Ubuntu, Feisty Fawn (7.04), seems to work OK for installation, but 7.04 is still very young and unstable. It could be installed on machines with nVidia controllers, but selling any such machine would have to come with a very clear disclaimer. Dave said he thought that was OK.

Disk replication
This process works for the simplest case:

  • source disk was installed on a machine where HD and CD assignments match the target machine
  • target machine has an ATI video controller


The process seems to be:

  • run the replication script
  • boot the new drive from OF, using boot device hd:,\\yaboot
  • modify /etc/yaboot.conf with the correct boot device, if necessary
  • run ybin or mkofboot


The goal is to be able to replicate with only two drives installed: a boot drive with partition images for several configurations, and a target drive. The replication script should take care of target configuration specification, probably with command line arguments or a configuration file. ybin or mkofboot should be able to write the boot things to a specific target partition, but the specified target configuration would have to include a complete boot device tree path, which would have to be known beforehand. That in turn means that the boot device tree paths would have to be known for all possible target machines.


Ubuntu and PowerPC support
The Ubuntu Technical Board has downgraded its PowerPC support to unofficial. The popular interpretation of this is that Ubuntu will still produce the PowerPC version, but at a lower priority, and will not hold up other releases for PowerPC-specific problems. Freegeek can still use Edgy (6.10) for ATI video based Macs. I personally think that Dapper should be reconsidered. I haven't looked at Dapper on Macs, and I don't know what the earlier encountered problems were. See Macs with nVidia video above for nVidia video comments.