2020-11-12

The final NAS setup

This is a follow-up article to my previous post on the NAS upgrade to the Biostar FX9830M motherboard. As soon as I received all the required components, I spent the first available evening assembling the new setup. As mentioned previously, I decided to utilize my old InWin BM639 case instead of buying a new one to save costs. In the end, I believe it may even stay as a permanent solution, since I am pretty satisfied with the end result. My final setup is:

Dell 540-BBGS 10Gbit SFP+ network adapter

Placing the motherboard in the case

I hit a major obstacle as soon as I tried to place the motherboard inside the case. It is 1.3 cm wider than the mini-ITX/mini-DTX standard, so it collided with the internal PSU; the case didn't have enough space in between to fit both. Fortunately, the case allows removing just its top part, which gave me the option of relocating the PSU to the top of the case instead. The stability of this placement comes from the fact that the cables are connected to the motherboard pretty tightly, and the PSU has a slide at the bottom by design, which allows fixing it at one corner of the case. It is a workaround and definitely not the best solution, but it works pretty well. Once the PSU obstacle was solved, the motherboard fit in perfectly. The PCIe interface occupies the second expansion slot by design, thus only single slot cards can fit (though, by removing the HDD cage, two slot low profile cards can likely be used). I believe there are alternatives for PSU placement that would avoid the external position, but they likely require some modifications to the case. I may investigate those options some time in the future.

Biostar FX9830M VER6.0 motherboard

Placing the hard drives

The second obstacle was to fit two 3.5'' hard drives and one 2.5'' SSD inside. By design the case has one 3.5'' HDD cage. Besides that, it was supposed to have a 5.25'' ODD cage as well, which occupies a big part of the front side of the case, but I likely lost it over the years (I replaced it with a big fan years ago). Because of this, I needed to think out of the box a bit on how to utilize the space and, if possible, avoid external placement. Regarding the SSD, I was initially planning to replace it with a Sandisk Extreme Pro 3.0 USB flash drive placed internally using a USB 3.0 pin header, but it turned out the pin header protrudes too far to the left, hitting the CPU fan. What is more, I wasn't sure whether the flash drive wouldn't be too tall as well. Because of this, I decided to stay with the Kingston SATA SSD. It had its own challenges because of its physically damaged SATA port, but I successfully placed it vertically on top of the 3.5'' cage using the holes designed to mount a 2.5'' drive. The vertical position was required to avoid bending the drive's SATA port against the rounded metal edge on the top of the HDD cage. Fortunately, this position wasn't obstructed by the network card either, though it makes access to the drive a bit more complicated. The first HDD was placed in the cage itself as designed, so it didn't pose any issues. Finally, I mounted the second hard drive in the original position of the PSU, fixing it with two screws to the back side frame of the case. Thus, all the drives were successfully placed inside the case, leaving only the PSU outside. In theory, I could even add one more drive on the front side instead of the front fan. It may affect the airflow inside the case though, so such a setup would need careful testing, especially in the summertime.

Toshiba MG04ACA600E 6TB HDD

As a side fact, two SATA ports are provided by the CPU chipset and another two by an ASMedia ASM1062 controller. None of them provides RAID support. As I mentioned before, the second SATA port is also connected with the NVMe slot, thus only one of them can be used at a time.

The network card needs a low profile bracket

The final issue was caused by the fact that the Dell network card wasn't supplied with a low profile bracket, though the bracket is easily replaceable by design (standard screws are used). Because of this, I needed to order one from eBay, while mounting the card without it temporarily. I was concerned that it might easily pop out of the PCIe slot; luckily, however, it stayed very firmly in position even during cable insertion/removal.

The final arrangement can be seen in the picture below (before cable management):

Software adjustments

Software-wise, Artix Linux didn't boot out of the box, but this was solved simply by recreating the initial ramdisk environment, which had also been the typical way to resolve an unsuccessful update in the past. To my surprise, the SSD worked very stably in the new system, despite its damaged SATA port. I didn't experience any boot issues, and I could use all the HDD utilities like smartctl and hddtemp without causing the system to stall! What is more, the drive finally utilized its full bandwidth potential, which was very noticeable during the upgrade process.
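
For reference, recreating the initramfs from the installation media boils down to a few commands. This is a minimal sketch assuming the standard Artix/Arch mkinitcpio tooling; the device name is illustrative:

# boot the Artix installation media, then mount the installed root
mount /dev/sda2 /mnt
artix-chroot /mnt          # arch-chroot on plain Arch media
mkinitcpio -P              # regenerate the initramfs for all presets
exit                       # leave the chroot and reboot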

As initially planned, I replaced the NetworkManager service with the dhcpcd and netifrc combination. It was a pretty simple procedure, since only the /etc/conf.d/net file required a few additional lines to set up the bonding interface, following the provided example. Once the network was ready, I tested copying files between two systems over the 10Gbit network; it reached around 165MB/s, which is considerably better than the previous speed ranging from 40 to 70MB/s depending on the network controller, but is likely limited by hard drive capabilities. Because of this, a RAID0 setup could increase transfer speeds even more, it just doesn't seem necessary for me currently. A snappier file browsing experience is also easy to notice. Additionally, I used the iperf3 network testing utility, which reached ~6.25 Gbit/s between the two systems.
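
For illustration, a netifrc bonding setup needs only a few lines in /etc/conf.d/net. This is a minimal sketch based on the netifrc documentation, not my exact configuration; the interface names, bonding mode and addressing are assumptions:

# /etc/conf.d/net
config_enp2s0f0="null"              # slaves get no configuration of their own
config_enp2s0f1="null"
slaves_bond0="enp2s0f0 enp2s0f1"
mode_bond0="balance-rr"
config_bond0="192.168.1.10/24"

The bond interface is then enabled by symlinking the service (ln -s net.lo /etc/init.d/net.bond0) and adding it to the default runlevel. The iperf3 test itself is just "iperf3 -s" on one host and "iperf3 -c <server address>" on the other.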

iperf3 testing results

Finally, I could remove all the workarounds applied to the previous system, including re-enabling kernel updates.

BIOS 

The motherboard uses a UEFI BIOS with a typical graphical setup controlled by keyboard or mouse. It does not provide as many options as you may expect from a usual desktop gaming motherboard, but it has some specific options which are likely aimed at business environments and power users (TPM, security, virtualization). In my specific case, what I mainly missed were finer controls over fan speed and other power saving features. The motherboard fan is usually very silent but can become noisy under high CPU usage. Also, I didn't find an option to disable the main audio device (Realtek ALC887) (only the one belonging to the GFX device). The Dell network card supports EFI, so I changed the network device to prefer UEFI boot instead of the legacy one. The BIOS provides an option to choose the boot device by pressing F9 or to enter the BIOS flasher with F12 (no BIOS updates are available as of yet). In general, it is still an improvement over the old legacy BIOS in the Jetway motherboard but may disappoint more demanding users.

I also made a mistake once I disabled the internal NIC: it changed the link's enumeration number from enp3 to enp2, which I failed to notice initially, thinking it was a possible BIOS bug and reporting it to Biostar. Actually, only a simple network reconfiguration was needed.

FX9830M BIOS main screen

Conclusion

In conclusion, the full upgrade process, from hardware to software, went quite smoothly. The motherboard proved to be almost compatible with mini-DTX cases, with the PSU placement being the major concern. In my case, I resolved it quite easily thanks to the highly modular case design. The final result is a considerably more stable and faster system with big potential to be utilized in a few different ways which were not possible previously. I still plan several improvements, like data encryption and a cold backup, in the near future. I am also waiting for the low profile bracket for the network card and a low profile COM bracket which will occupy the upper empty slot (I use a serial console on some occasions) to complete the setup. Considering the successful setup with the InWin BM639 case, I started to have doubts about the need to replace it with a Fractal Design Node 304 or similar case, as I initially envisioned. I will likely look into all other options first before making such a purchase. As of today, the main goals are to reintroduce double backups, better secure the data, and utilize the new system for routine actions which I currently perform either manually or using scripts. And hopefully, the system will serve me no less time than the one it replaced!

2020-10-27

NAS upgrade to Biostar FX9830M board

Intro

Despite my recent projects and various minor hardware updates to the old NAS server, which you can read about in my previous blog posts, I had already been looking around for a major hardware upgrade. I was not sure how long the search would take, for reasons I will explain shortly, but the aging system was being pushed to its limits, and it was only a matter of time before stability, performance, requirements, and other factors would start to drift away from reasonable levels. In addition, I was constantly upgrading my other hardware, including a gradual upgrade to a 10Gbit based local network. Utilizing it better, beyond a simple technology playground and testbed, was a complementary motivation to spend additional resources on NAS modernization, which would allow me to focus on its main purpose instead of putting most of my effort into workarounds and patches just to keep it running reasonably well.

Inside Jetway JNF-76 based NAS system

The search

The market has changed quite a bit since I bought the Jetway JNF76 back in 2009. The mini-ITX form factor, introduced by VIA Technologies, was still quite strong then, and various solutions were available, ranging from the home to the industrial/embedded market. Specialized online computer shops selling such hardware were still in business. Once I started looking for the new board a year or so ago, I noticed that most of the stores I used to buy such products from are either gone or have transformed their business in different directions. VIA Technologies itself has also largely left the consumer market and the x86 CPU business (I know they have a joint venture in China, but those solutions have zero availability outside China and I don't associate them directly with VIA anymore; as proof of that, VIA just recently announced plans to sell its share in the joint company). It's not that the market completely vanished; there are new stores which sell similar products in a sea of other stuff, but they are not as focused and specialized as before. On top of that, ARM based solutions have started to push into the small PC market, but I am not completely ready to go in this direction yet, and they usually don't have the required specifications. Finally, this time I really limited my budget, which forced me to look at home/end user solutions instead of server/embedded ones. Unfortunately, in this space motherboards have surprisingly moved away from the mini-ITX form factor, aside from a few pretty expensive gaming focused solutions or very cheap Atom CPU based ones with quite limited specifications. Mini-PCs seemingly moved mainly to custom size barebones or NUCs. Because of this, the right solution was extremely hard to find with the requirements I expected to fulfill:

  • At least 4 SATA III ports
  • PCIe 2.0 x8 electrically (not just a physical slot)
  • Energy efficient CPU, preferably soldered to motherboard and passively cooled.
  • At least two USB 3.0 or higher ports
  • Ideally mini-ITX form factor
  • Small price (< 200 euros)
  • Readily available

The Biostar FX9830M motherboard caught my attention as soon as it was announced in early spring this year. It looked almost like an ideal solution, having a PCIe x16 slot (x8 electrically), 4x SATA ports + an M.2 slot, and 2x USB 3.0 (marketed as USB 3.2 Gen1) plus an internal header for 2 more. It ticked the most important boxes, with some caveats:

  • Adding an SSD to the M.2 port disables one SATA port, forcing me to choose between a setup of 4 hard drives + a USB 3.0 system drive, or 3 HDDs + 1 fast system SSD in the M.2 port.
  • SATA controllers do not support RAID.
  • The AMD FX-9830P CPU TDP is 35W, which is a bit high (the VIA U2300 is a 5W TDP CPU, plus 5W for the VX800 chipset). In addition, it is actively cooled.
  • A serial port is not available on the back panel (a COM header exists though).
  • Actual availability was completely missing till early/mid October, both in my country and in international online stores like Amazon. At the time of writing, it still looks scarcely available, at least in Europe.
  • The worst issue, which I initially overlooked, is that it is not a mini-ITX form factor board but a slightly bigger one, having 200x183 mm dimensions instead of 170x170 mm.

Nevertheless, once I finally found a very small batch of these motherboards available in a local computer and electronics shop a few weeks ago, I didn't hesitate much and decided to build the new NAS system around this model. I deemed the points mentioned above insignificant to my envisioned setup and adjustable in one way or another. The price of the board was a bit above 100 euros.

Biostar FX9830M motherboard

Overcoming the size issue

The motherboard is not microATX sized as claimed by Biostar, so I did some research on whether it can fit some small cases (bigger than the current one but smaller than microATX). It almost conforms to the mini-DTX size but requires an extra 1.3cm of width, since that form factor is 203x170 mm (its width matches mini-ITX). Since the motherboard doesn't conform to any actual form factor, Biostar simply labeled it as a microATX one, though a full size microATX case (244x244 mm, or 4 expansion slots) would definitely be a waste. It would certainly fit into a DTX case, but this AMD proposed form factor never gained popularity, so the marketing team probably decided to avoid mentioning it. Considering the size of the board, it became clear that it likely fits into some mini-ITX cases with two expansion slots (provided the mentioned extra width is available, which may not always be the case). The Fractal Design Node 304 seems to be one of the best options to consider, since it officially supports mini-DTX and can accommodate up to 6 hard drives, which is perfect for my NAS server needs. From the videos and photos available, I am not completely sure whether the extra width would be hindered by the PSU frame, but the frame looks removable in the worst case (combined with an SFX/TFX/picoPSU instead of a traditional ATX one). To save initial costs though, I decided to utilize my old InWin BM639 case, which housed my VIA EPIA-M900 board based computer in the past. It happens to have two slots and fits the board with some workarounds.

InWin BM639 case

The network setup

As mentioned before, one of the main motivations was the ability to utilize the local 10Gbit network. This decision was accelerated by a few important recent purchases. The major one was the MikroTik CRS309-1G-8S+IN 8 port SFP+ 10Gbit switch. It was supplemented by an Asus XG-C100F Aquantia AQC100 based SFP+ network adapter. The Asus card was important because it is supported by NetBSD (and FreeBSD). It enabled me to completely switch to a 10Gbit interface on my main computer. In addition, I had previously acquired a Dell QLogic 57810 dual SFP+ port 10Gbit Ethernet card. This particular network adapter will be transferred to the NAS server, and it dictated the requirement for a PCIe 2.0 x8 slot. Since it has two ports, network aggregation can be utilized in a similar manner to the current 1Gbit setup (the MikroTik switch supports interface bonding on its side as well). Finally, I still have the Tehuti TN4010 based Edimax EN-9320SFP+ adapter bought back in 2018, which allows me to connect one more computer when/if needed.

Mikrotik CRS309-1G-8S+IN

Asus XG-C100F

Storage setup

The original plan was just to transfer my existing setup: 2x6TB hard drives plus 2x3TB drives in an LVM RAID0 setup for the second backup. The backup drives are synchronized using rsync once a day, though at a different time frame for each backup. Unfortunately, just before the purchase of the new board, one of my old 3TB HDDs died. Thus, only the new 6TB drives will be used from day one. In the future, I plan to go back to the double backup, however I am not sure yet what the final setup will be in this case (I can see a few options: manual sync as before, a RAID0+1 setup (using LVM), or a separate cold backup over USB 3.0 or the network). For the system drive, I am currently planning to switch from a SATA SSD to a USB 3.0 SSD drive placed inside the computer case using a Delock USB 3.0 pin header. This decision is mainly driven by the damaged SATA port on the current system SSD, which I believe is causing some stability issues. Besides that, the initial plan was to utilize all 4 SATA ports with data HDDs, until the recent failure.
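
The daily synchronization itself is conceptually a single rsync invocation run from cron. A minimal sketch, with assumed mount points rather than my actual paths:

# mirror the data volume to the backup volume, deleting files that
# disappeared from the source; -aH preserves permissions and hard links
rsync -aH --delete /mnt/data/ /mnt/backup/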

Software-wise, the main operating system will stay Artix Linux with the OpenRC init system. Additionally, I am planning to introduce full data encryption. Finally, considering the need to configure the new network interfaces, I may switch to dhcpcd/netifrc instead of NetworkManager for DHCP client duties and network management, to simplify my software stack and use partially familiar tools (dhcpcd is the default DHCP client in NetBSD).

Bonus improvements

A very important expectation is increased stability of the system. Though the current system wasn't particularly unstable once booted, it had serious issues with the boot process itself, mainly because of the ADPE4S-PB daughterboard and likely also because of the damaged SATA port on the system SSD. This was partially circumvented by using my very old external PCI SATA controller, but it limited system performance (theoretically SATA I, but in practice twice or more slower because of drivers), still wasn't 100% stable, and had limited expansion capabilities. The FX9830M motherboard has a mature AMD platform which is well supported by the Linux kernel, so stability should not be an issue anymore. The same applies to the Dell network card, which I have tested in my personal computer for several months. As a bonus, much higher CPU performance will allow me to focus on NAS related configurations (encryption, NFS, Samba, RAID) and various additional background services and scripts (rsync, fossil-scm, git, torrents, streaming, filtering, folding@home on certain occasions, etc). Additional perks include a more modern UEFI BIOS with richer configuration and boot options, possibly higher hard drive capacity support (though I haven't hit the limit with the JNF-76), USB 3.0 for additional flexibility with external storage, and SATA 3 instead of SATA 2 (VIA chipsets never upgraded to SATA 3 for some reason). A very fast PCIe 3.0 based SSD in the M.2 slot is also an option for the system drive in the future, in case I won't need all 4 SATA ports. And finally, the board supports up to 32 GB of RAM (I initially opted for 16GB), which should be more than enough for a NAS server including any additional services. I really hope to use this system for a similar timeframe as the original one, which is around 10 years or more.

 

I am planning to write a follow-up article once the setup is ready.

2020-09-22

Information about Vortex86 EX2 SoC

This year I got a chance to acquire one of the first DM&P Vortex86EX2 SoC based systems (an ICOP VEX2-6427-5C4NE specifically). Similarly to my article about the DM&P Vortex86DX3 SoC, I assembled this short article from the information I gathered from sources like dmesg, /proc/cpuinfo (NetBSD, SparkyLinux), datasheets, official web sources and images.


Below you can find the main Vortex86EX2 specifications (mainly taken from the official page, but supplemented by some information from BIOS and operating system messages, as well as hardware info):

  • Identified as DM&P A9133 in BIOS.
  • Master/slave design, two independent cores (as per my understanding, they cannot be used by the same operating system at the same time).
  • CPUID is 0x38504d44 (not sure yet if this applies to the master core only or both).
  • Frequency is up to 600MHz for the main core and 400MHz for the secondary one.
  • 6-stage pipeline (both).
  • 2x DMA controller (both).
  • Integrated FPU (both).
  • 16KB I-Cache, 16KB D-Cache (both), 4-way 128KB L2 Cache with write through or write back policy (main only).
  • DDR3 control interface, up to 2GB RAM, 16-bit data bus, 2 ranks, clock support up to 400MHz, supports ECC.
  • Real time clock (both).
Shared:
  • 65 nm manufacturing process.
  • HD-Audio (Realtek ALC262 is used in my system).
  • 2x USB 2.0 (ID: 6061) + USB device (PCI ID:1061).
  • ISA bus interface (PCI ID: 6013):
    • AT clock programmable.
    • 8/16 Bit ISA device with Zero-Wait-State.
  • Up to 3 SD/MMC cards (my board has one on the SoC module and one on the board). Supports SDSC, SDHC and SDXC. eMMC is also supported, up to version 5.1.
  • 2x CANbus 2.0A/2.0B.
  • 2x SPI controller.
  • Up to 10x COM ports.
  • 2x Fast Ethernet (10/100Mbps), my board has 1xR6040 model 6.
  • 3x Motion Control Interface.
  • Crossbar interface.
  • 128 programmable I/O pins (GPIO).
  • Temperature sensor.
  • Package: 19x19mm, LFBGA-441.
  • Operating temperature -40 to 85℃.
Vortex86EX2 System Block Diagram
Vortex86EX2 Specs and Block Diagram

According to /proc/cpuinfo, the CPU supports the CMPXCHG8B, CMOV and FXSAVE/FXRSTOR instructions, MMX, SSE, SSE2 and SSSE3 extensions, and even the NX bit (using PAE?), thus it seems to be the most advanced DM&P SoC in this regard. It also supports page size extension (4MB pages), physical address extension (PAE), the SYSENTER/SYSEXIT instructions and the time stamp counter. Because of the above, it can likely be classified as an i686-compatible CPU (it loads an i686 Linux kernel successfully). The FPU is built-in on both cores. It does not have an integrated GPU, so an external one is needed (the company is offering the Vortex86VGA mini PCI-E card as a solution). My board has 4 USB 2.0 ports, but two of them are provided by a Genesys Logic GL850G hub controller (USB ID: 0x0608). Additional information can also be found in the CWID report on cpu-world.com (the report has apparently incorrect, halved cache sizes). /proc/cpuinfo can be found below.

/proc/cpuinfo (combined from NetBSD 9.99.69/Sparky Linux 4.19.0-6-686):

processor    : 0
vendor_id    : Vortex86 SoC
cpu family    : 6
model        : 0
model name    : Vortex86EX2
stepping    : 2
cpu MHz        : 600.032
physical id    : 0
siblings    : 1
core id        : 0
cpu cores    : 1
apicid    : 0
initial apicid    : 0
fdiv_bug        : no
f00f_bug    : no
coma_bug    : no
fpu        : yes
fpu_exception    : yes
cpuid level    : 3
wp        : yes
flags        : fpu pse tsc msr pae cx8 apic sep pge cmov pat mmx fxsr sse sse2 nx cpuid pni ssse3
bogomips    : 1200.06
clflush size    : 32
cache_alignment    : 32
address sizes    : 36 bits physical, 32 bits virtual
power management:


CPU-Z 1.94.0 screenshot
(through wine, L2 is not detected properly)

Note: There is no way to boot into the slave CPU on my board, so all actual information is based on the master one.

2020-05-06

Beware of ahci module change from dynamic to built-in on Artix (Arch) Linux


Recently, after one of the usual system updates, I suddenly ended up with an unbootable system on my Artix based NAS server. Since the system's boot process is not the most stable in general, I initially thought it was yet another "moody" day for the system, caused by the Marvell controller on the ADPE4S-PB daughterboard. However, in that case the boot usually succeeds within several attempts. If not, it was occasionally caused by failed initramfs generation during the update process. In such situations, I would regenerate it by booting into the Artix (or previously Arch) installation system and chrooting into the main system (or by using the fallback initramfs boot option). This procedure requires reconnecting the system drive to the VIA based controller. Nevertheless, nothing was helping to bring it back to life, and I started to suspect that I was dealing with a new issue this time. My initial speculation pointed to failing old hardware, but in the end it turned out to be a software issue. Since I could successfully boot into the system using the VIA integrated controller, it was becoming obvious that the module configuration was not being applied for some reason. I confirmed that by looking at the lspci output, which didn't show the AHCI driver being applied to the Marvell controller.


A little investigation revealed that the new kernel had the ahci.ko.xz and libahci.ko.xz module files missing from the /usr/lib/modules//kernel/drivers/ata directory. They were present there before the upgrade. Since I depend on a specific property of the AHCI driver, the reason for the failure was pretty clear. Despite that, I didn't know yet why those modules were missing. Regardless, I was looking for the fastest way to restore my system first. The first solution I came up with was to revert to the previous kernel. It turned out to be possible due to the fact that the pacman package manager keeps older packages in its cache. The command "pacman -U /var/cache/pacman/pkg/linux-5.5.10.artix1-1-x86_64.pkg.tar.xz" downgraded the kernel and, fortunately, the system was bootable again.

The downgrade solution was supposed to be temporary, since I can't ignore upgrades forever. Initially, I assumed that the missing modules were a mistake, so I filed a bug report. However, it was soon closed with the explanation that the ahci modules are built into the kernel and it's not a bug. I believe this change happened starting with the 5.6 kernel series, since 5.5 based kernels were still working for me. Because of this, I made an attempt to apply the configuration corresponding to the dynamic modules to the built-in kernel code. Thanks to the good Arch Linux online documentation, it was easy to find the required information. This page describes how to pass module parameters to the kernel and this one describes the GRUB bootloader configuration. To pass a module parameter, "module.param_name=param_value" needs to be added to the kernel line. Blacklisting is performed by passing the "module_blacklist=module_name" parameter. In my specific case, I needed to pass the marvell_enable=1 parameter to the ahci driver and blacklist the pata_marvell module. So, these steps needed to be performed in my case:
  • sudo vi /etc/default/grub
    • change GRUB_CMDLINE_LINUX line to:
    • GRUB_CMDLINE_LINUX="ahci.marvell_enable=1 module_blacklist=pata_marvell"
  • regenerate grub.cfg by running sudo grub-mkconfig -o /boot/grub/grub.cfg
It should be safe to keep both the GRUB configuration and the previous configuration for dynamic modules. They should not interfere with each other, and one or the other will simply be ignored depending on whether the module is built-in or dynamic.
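
For comparison, the dynamic-module equivalent of the kernel parameters above lives in modprobe configuration. A minimal sketch, assuming a standard modprobe.d setup (the file name is arbitrary):

# /etc/modprobe.d/ahci.conf: only effective while ahci is a loadable module
options ahci marvell_enable=1
blacklist pata_marvell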

Unfortunately, this solution didn't work as well as expected. Though I was able to boot into the system, the success ratio decreased to an unbearable level: only 1 out of 5 to 7 attempts was partially successful. By that, I mean the system booted and I could interact with it, however, none of the attempts initialized all the hard drives correctly. Either the system drive or one of the other two hard drives in the LVM RAID was failing. Thus, the RAID volume wasn't mounted at best, or the system failed to boot at worst. Quite often, the system disk was still not recognized early in the boot process, leading to the rescue shell. Because of that, I was forced to revert to the old kernel again.

Considering that I didn't manage to fix the issue, I am not sure if there is a workable and stable solution at this point. A manually built kernel with a modularized ahci module may help, but it would mean that I would need to track kernel upgrades myself. Moreover, building the Linux kernel is not as trivial a process as I wish it were, so it is not a viable option for me. As a temporary solution I can completely disable Linux kernel upgrades and keep the other software up-to-date. However, as a long term solution, it may force me to look for another distribution which still builds AHCI as a module, or even to consider a complete hardware update. Only time will tell which one will be easier to implement.

In conclusion, if an Arch Linux based system is used along with the Marvell 88SE6145 SATA controller (or any other Marvell controller which requires ahci module specific configuration), I currently advise refraining from upgrading to 5.6.x kernels. One can try to experiment with kernel parameters as described above, however, it is advisable to make a backup image of the system beforehand so it can be easily restored. The downgrade path may not always be successful because of other dependencies and should not be relied on.

2020-02-29

2019 in review


Starting last year, I resumed writing yearly reviews after a long break. Since 2019 was no less active in various personal computer related projects, I believe it may start a tradition of such retrospective reviews for the years to come. So without further ado, I will move on to last year's activities.

Downgrade from Radeon RX 460 to Radeon R7 370

For almost two years, I used an Asus Dual RX460 OC edition graphics card based on the AMD Radeon RX 460 GPU. However, since 2019, I started using NetBSD much more regularly alongside Manjaro Linux (which is still my primary OS) and, unfortunately, the Radeon RX 460 GPU is unsupported by the former. Because of that, I decided to downgrade to older generation hardware so I could be comfortable using both. After broad research, I opted for the GIGABYTE GV-R737WF2OC-2GD, which is a Radeon R7 370 based card. The R7 370 uses the Pitcairn Pro GPU core based on the 1st GCN architecture. The main motivation behind choosing this specific model was its similar performance to the RX 460, its reasonable price and its availability in stock at local stores.

Despite some fears, the card didn't disappoint. NetBSD immediately recognized the GPU and applied the proper drivers, giving the expected hardware acceleration. Because of this, I started using NetBSD on my desktop quite regularly. To my surprise, it also visually felt more responsive than the RX 460 on Linux. Despite being an older model, its performance is actually on par with the replaced card in general usage, except for possibly increased power usage.

Dell QLogic 57810 10G Ethernet card

Since my router supports one SFP+ 10Gbit Ethernet port, I was curious to try it out and bought the Edimax EN-9320SFP+ 10Gbit network card back in 2018. Though it was the cheapest 10Gbit solution at that time, with the smallest card around, it had a few annoying flaws. The card was based on the Tehuti TN4010 NIC, which is poorly supported by alternative operating systems other than Windows. None of the BSDs has a driver at all. Linux has one, but not in the mainline kernel; thus, the Linux driver needs to be compiled and installed manually, and the process needs to be repeated after almost every kernel update (which happens relatively often). On top of that, the Linux driver still had at least one issue: if the computer goes into standby mode, the NIC stops working after resume, and only a reboot brings it back. Nevertheless, I was not planning to replace it, since the alternatives were too expensive (especially Intel based solutions). By the way, Edimax has also never updated the driver on their page; I was using the one from GitHub.

Unexpectedly though, one local shop offered the Dell QLogic 57810 (Dell part 540-BBGS) dual port 10Gbit network card at a sizable discount. I checked that it was supported by at least FreeBSD and that the Linux kernel had a mainline driver as well, so I took the bait and bought it. The card itself is much bigger than the Edimax, has an active cooler and requires a PCI-E 2.0 x8 slot instead of PCI-E 2.0 x4. On the other hand, it is a far more professional product, having sophisticated firmware and two SFP+ ports. Once installed, it worked out of the box in Windows, Linux and FreeBSD, with no manual work required. Contrary to the Tehuti NIC, it had no issues on resume in Linux either. In general, I believe this purchase was pretty successful, except for the fact that it is not supported by NetBSD.

I dream of porting the FreeBSD driver to NetBSD, however, it may be too complicated a task, since the driver is huge. Nevertheless, I am planning to make an attempt sometime in 2020 and see what happens.
Dell QLogic 57810 dual SFP+ Ethernet controller

Toshiba MG04ACA600E and D-Link DGE-528T

Yet another hardware update was already extensively described in my blog. Though the main goal was just to upgrade from 3TB hard drives to 6TB ones in my NAS server, it ended up being a bigger project, from the migration to Artix Linux to an LVM RAID configuration on the recycled 3TB drives.

The new hard drives were two identical Toshiba MG04ACA600E drives, which showed a quite visible performance improvement in read operations, but write speed was seemingly restricted by driver issues and showed only minimal improvement compared to the older drives. Thanks to the LVM RAID0 setup, I also reused the older 3TB drives as additional back-up storage.
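
Such a striped LVM volume over two drives takes only a few commands to create. This is a hedged sketch; the device, volume group and volume names are assumptions, not my actual layout:

# turn the two recycled drives into LVM physical volumes
pvcreate /dev/sdb /dev/sdc
# group them and create a RAID0 (striped) logical volume across both
vgcreate vg_backup /dev/sdb /dev/sdc
lvcreate --type raid0 --stripes 2 -l 100%FREE -n lv_backup vg_backup
mkfs.ext4 /dev/vg_backup/lv_backup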

A Realtek RTL8169SC based D-Link DGE-528T network card was used to circumvent the loss of the second Ethernet port. Together with the integrated Realtek 8111C NIC, it showed worse network speeds than the Intel based daughterboard card, but that was an expected and acceptable downgrade.

Toshiba MG04ACA600E

Sanwa MA-TB41S trackball

For around ten years I successfully used a Kensington Optical Orbit trackball. It was bought in Sweden by my former colleague and friend. Initially I thought it would remain just an experiment, but it proved to be one of the best devices I've ever used. Unfortunately, the right button started failing so badly by the end of last year that I was finally forced to look for a possible alternative. Despite the pretty limited trackball options, I must admit it was one of the most challenging hardware selection decisions. The main reason for that was the lack of any possibility to try them physically. I believe there is a great risk of buying an uncomfortable trackball based on images only. It doesn't matter how many reviews you read; there are too many factors which can affect comfort, from your hand size to device quality and sensitivity. That is probably the reason why you can find so many contradicting opinions on the same product.

Theoretically I could even buy the same model, since Kensington is still selling it together with a few other long-living models, but I wanted to find one with a scroll wheel. After long research I narrowed down the options to three models, from Sanwa, Elecom and Kensington.

Though the Sanwa trackball was my last choice, I ended up buying it because of the difficulty of finding the Elecom model in Europe and the relatively bad reviews regarding the quality of the Kensington trackball's buttons. Unfortunately, I can't tell yet if I made the right choice... During the first month I was really disappointed. The trackball appeared to be a bit too big for my hand, making it difficult to reach the buttons comfortably. Quite often I was accidentally moving the ball before clicking the mouse buttons, thus easily missing the intended button or other UI element. The scroll wheel was too slow, too big and clunky. My hand was constantly getting tired. Nevertheless, I started to get used to it over time, decreasing the strain on my hand and increasing precision. Though, the scroll wheel still remains the biggest inconvenience, since it was the major feature I had been looking for. It works, but scrolling is very slow and the reaction time after rotating the wheel visibly lags. Furthermore, my thumb gets painful and tired easily; even over time I haven't really gotten used to it. Honestly, I am not sure if I will keep using it; I may switch to a regular mouse or try the Kensington Orbit trackball with the scroll ring. It is probably the most unsuccessful hardware purchase of 2019.

NetBSD

I was quite active in the NetBSD project community this year: I managed to port the IC Plus IP1000/1001 PHY driver from OpenBSD (used by many VIA EPIA boards, and also by IC Plus' own network cards), fixed support for the VIA VX800 (and possibly VX855, VT8237S) SATA controller (previously only partially working in IDE mode), fixed the 4World USB to Serial adapter (already mentioned in the 2018 review), found the fix for the Biostar X370GT5 SATA controller locking issue, and requested a pull-up of the fix for the D-Link DUB1312 USB network card to the NetBSD 8 release. In addition, I noticed and identified the applied fix for a long standing bug on Ryzen systems, where the OS was failing to identify a second SATA hard drive. This bug was even causing the UEFI POST process to stall on reboot. Fortunately, the fix made it into the NetBSD 9.0 release. I also noticed some typos and white-space issues in the code. Finally, I updated the seemingly abandoned Codelite IDE package in pkgsrc to build the latest stable version at that time (from 9.1 to 13.0). Despite these successes, I also had some let-downs. I couldn't fix the USB and Ethernet issues on the eBox 3352DX3-AP computer. I found a workaround to make the Ethernet controller work in case SMP and ACPI are disabled, but the proper fix is still unclear. Besides that, when the Fnatic Gear Rush keyboard suddenly stopped working, with the help of a NetBSD developer I identified that it was caused by an accidental switch to 6KRO mode. It appeared to be working in NKRO mode only (which was the opposite up to the NetBSD 7.1 release). Hopefully, it can be fixed to work in both modes in the future. Seeing quite a successful year, I hope that I will be able to solve some issues in 2020 as well.

eBox 3352DX3-AP 

The last chapter I will dedicate to some insights on how to run BSD systems on the eBox 3352DX3-AP, a Vortex86DX3 based system, since I promised that in the last review.

FreeBSD 12.0 works best of the BSDs and can boot perfectly with the default BIOS configuration. The only issue is the integrated network controller (R6040), which fails to work. This issue is common to all the BSDs, since their drivers are based on the FreeBSD one. The workaround described below for NetBSD may also be applied to FreeBSD: removing all the vte_reset() calls in the driver and recompiling a custom kernel should make the network work. Unfortunately, the latest release at the time of writing (12.1) introduced some regression which causes an instant failure on boot. I submitted a bug report, but I don't expect it to be resolved soon. So, in case FreeBSD is the preferred choice, I recommend running the 12.0 release. As an alternative to editing and compiling a custom kernel, USB based network controllers can be used to enable the network.

OpenBSD is probably in the worst situation. I identified only one way to boot into the system without kernel recompilation: by disabling the "ACPI aware OS" option in the BIOS. However, neither USB nor Ethernet works, which limits the system's usefulness to some local automatic jobs only. It can boot with ACPI enabled and IDE in legacy mode by recompiling the kernel without the DIAGNOSTIC option. SMP works in both cases. If you want to work with OpenBSD for any reason, I would recommend looking for a system with COM ports (or trying to solder one).

NetBSD support is somewhere in the middle among the three. In order to boot the system with the "ACPI aware OS" option enabled in the BIOS, IDE should be set to "legacy mode". Otherwise, various timeout exceptions will occur and the system will fail to boot. Similarly to OpenBSD, neither USB nor the network will work in this case, but both CPU cores (SMP) will work without the need to rebuild the kernel. Booting with ACPI and SMP disabled (boot -12) will give USB support. Furthermore, it is possible to make the network work in this case as well, by removing the vte_reset() calls in if_vte.c (the R6040 driver) and rebuilding a custom kernel. In case losing SMP and ACPI are acceptable options, this makes a fully working system. Unfortunately, the network workaround didn't help on OpenBSD (more specifically, it actually does work, but the network still fails to connect due to some other issue).
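
For reference, rebuilding a NetBSD kernel after patching if_vte.c follows the standard custom kernel procedure from the NetBSD guide. A rough sketch for i386, where the kernel config name is made up for illustration:

# after editing the driver source, build a custom kernel
cd /usr/src/sys/arch/i386/conf
cp GENERIC EBOX
config EBOX
cd ../compile/EBOX
make depend && make
# install it the usual way, keeping the old kernel as a fallback
mv /netbsd /onetbsd
cp netbsd /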

Summary and 2020

2019 was a pretty good year in the hardware/software context. Most of my projects went smoothly, and the newly acquired hardware met or even exceeded expectations (except the trackball). It was also my most active year in NetBSD. Besides starting to use it as a desktop operating system quite regularly, I managed to provide several patches to improve hardware support and made my first contribution to the pkgsrc project. I hope to keep this pace in 2020 as well, though time constraints are my main enemy. Hardware wise, I don't have specific plans for 2020. I may upgrade to a newer Ryzen CPU and upgrade some other parts, but none of these is a high priority. Like last year, such decisions will probably depend on pricing and unexpected deals. I do plan to revive my Alpha CPU based Microway PC164 Screamer system, and I have already bought "new old stock" 8x32MB SIMM RAM modules (Samsung KMM53616000AK-6) for it. Considering all these future plans, I believe that I will write at least a few articles this year, which I hope will be an interesting read.

2020-02-19

Kwort Linux 4.3.4 review

In my endeavor to revisit lightweight Linux distributions, I looked back on my old dream to try the CRUX Linux distribution. Since my last attempt ended in failure, I decided to check whether there were any CRUX-based distributions around with a simplified installation process. With the help of the DistroWatch.com filter I found that only two CRUX-based distributions existed. Of the two, Kwort Linux seemed like the one I might have been looking for. There were a few uncertainties though:
  • It wasn't clear if I would also need to build the kernel during the installation process, similarly to the CRUX distribution.
  • At that time the project homepage was down despite its relatively recent release (2019-06-16), casting doubts on whether it will be maintained in the future.
  • I wasn't sure how the distribution's kpkg package manager operates and whether it is still maintained. Finally, I didn't know if I would be able to use the CRUX ports system alongside it. I believe that, together with the BSD style init system, it makes up the essence of CRUX Linux, and support for it was one of my requirements for a CRUX based distribution as well.
Nevertheless, I downloaded the ISO image from linuxtracker.org with the intention of trying it out on my VIA EPIA-M900 motherboard later on. The ISO image is around ~917 MB in size. Fortunately, before my attempt, the homepage was partially restored, with the documentation available on GitHub (by the time I finished this article, the homepage was fully recovered and updated). It provided answers to some of those uncertainties, except the maintainability prospects of this small distribution. CRUX ports appeared to be supported to a certain extent, and the installation process didn't require recompiling the kernel either. Furthermore, I also checked that the kpkg mirrors were still updated regularly. All those factors combined gave a decent reason to try Kwort Linux.

Kwort Linux does not have a "real" installer, so to speak. Once booted, it automatically logs into the console and prints a long message describing the installation process. It can always be reprinted using the "helpinstall" command. All the steps are manual, as summarized below (see the sketch after this list):
  • Partition the hard drive using one of the fdisk/cfdisk (for MBR) or gdisk/cgdisk (for GPT) utilities. In my case I didn't need to partition the hard drive, since I just reused the partitions of the previously installed SparkyLinux.
  • Format the partitions using the appropriate mkfs command.
  • Mount the root partition to /mnt/install/ and any other partitions under it.
  • Run the pkgsinstall command to install the base packages.
  • Use the jumpOS command to chroot into your newly installed system.
  • Finally, configure the system by editing /etc/fstab and /etc/rc.conf (the vim editor is available), creating users and installing a bootloader (lilo and grub2 options are available).
After all these steps, the installation is finished and the computer can be rebooted.
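
Condensed into commands, the whole procedure looks roughly like this. The device name and filesystem choice are illustrative assumptions; pkgsinstall and jumpOS are the installer's own commands described above:

cfdisk /dev/sda                 # partition (MBR; use cgdisk for GPT)
mkfs.ext4 /dev/sda1             # format the root partition
mount /dev/sda1 /mnt/install    # mount it as the installation root
pkgsinstall                     # install the base packages
jumpOS                          # chroot into the new system
# inside the chroot: edit /etc/fstab and /etc/rc.conf, add users,
# install grub2 (or lilo), then exit and reboot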

In my case everything went smoothly and the newly installed system booted successfully. By default, the kpkg package manager doesn't have any repositories, so the first task is to install one. europa.kdb is available after installation and can be installed using the kpkg instkdb /root/europa.kdb command. An alternative ctrl-c.kdb mirror was available here. After that, the package manager is pretty simple to use: kpkg update && kpkg upgrade will upgrade the system. Unfortunately, the package manager doesn't handle dependency management, so issues appeared right after the first upgrade. Several applications were failing to start, citing missing library files. In all cases, I solved it by reinstalling those libraries. However, the challenge was to find the right package, as at times a few libraries needed to be reinstalled before an application started to work again. Another issue is that kpkg is pretty limited: at the time of writing this article, it had only 203 packages.

To partially circumvent the missing packages, CRUX ports can be used. To achieve that, you initially need to install the ports packages from the CD image: kpkg install /mnt/cdrom/packages_more/ports/*.tar.gz. After that, just refer to the CRUX ports documentation. Unfortunately, not everything compiled successfully either, and during kpkg upgrade, packages may interfere if versions don't match. Nevertheless, I successfully used the ports system to install some applications unavailable in the kpkg mirrors (like wget). As a last resort, I compiled some packages manually from the original repositories (like the openchrome video driver).
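
In case it helps, typical CRUX ports usage boils down to a few commands (see the CRUX handbook for details). wget is just an example here, and the exact package file name depends on the port's version:

ports -u                        # sync the ports tree
cd /usr/ports/opt/wget          # enter the port's directory
pkgmk -d                        # download the sources and build the package
pkgadd wget#*.pkg.tar.gz        # install the resulting package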

To run Xorg, you just need to run the startx command. There didn't seem to be any display manager available either. The default window manager is Openbox, combined with conky and tint2. There are some distribution specific shortcuts, like alt+z to launch urxvt or alt+x to launch the browser. Actually, no alternative to Openbox was available in kpkg, though the ports system provided some.

Probably the final point is that, though the kernel binary is provided during installation, no kernel updates are available. So, in order to update it, compiling manually is necessary. The installer provides version 4.19.46 in the ISO image. I successfully updated to 4.19.88 soon after installation.
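
The update itself is just a generic manual kernel build. A rough sketch of what that involves, assuming the sources are unpacked under /usr/src and that you reuse your current kernel's .config if you have it:

cd /usr/src/linux-4.19.88
make olddefconfig               # update the old .config, accepting defaults
make -j$(nproc)                 # build the kernel and modules
make modules_install            # install modules under /lib/modules
make install                    # copy the kernel; then update grub2/lilo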

In conclusion, I can say that Kwort is a really different distribution compared to the mainstream ones. It requires more manual work and research, but in turn it gives one of the leanest systems available, a fast boot process, and complete control in your hands. It is suitable mainly for experienced Linux users or people who are not afraid to tinker with their systems. It can be a perfect distribution for secondary PCs, for testing and rescue purposes, or as a dual boot option. For desktop usage, one should have quite minimal requirements, since running typical heavyweight applications or games may be challenging. Here is a nice response from the distribution's creator and developer, describing its features and goals. It also has an IRC channel on the OFTC network where you can reach him directly.
Default Kwort Linux 4.3.4 desktop

2020-01-27

Beware of QAFF CPU on X79 platform

This article is probably not very relevant anymore, considering the age of the platform and the rarity of the CPU, but nevertheless, it can be useful for historical purposes or if someone happens to have the CPU mentioned below. Back in 2012 I bought an X79-based Sandy Bridge-EP platform (built around an ASRock X79 Extreme4-M motherboard) together with an engineering sample CPU having the QAFF S-Spec (supposedly a 4-way Xeon E5 which doesn't have a corresponding production part). Though the CPU was recognized by the motherboard and the system initialized successfully, over time it proved to be quite unstable and stressful to use. The two main issues were:
  • The initial system boot and even the reboot process could randomly freeze with a varying number of BIOS beeps or motherboard boot status codes (the Extreme4-M has a dedicated LED indicator for them). Probably the most common final code before a freeze was 0x67 - CPU DXE initialization (CPU module specific). I believe it could have been RAM detection/initialization issues. Unfortunately, I couldn't find any pattern to reproduce the failure consistently.
  • PCI-E graphics cards with the v3.0 specification (possibly v2.0 as well) led to failures of the OS boot process into the graphical environment (Windows 7, Gnome on Manjaro Linux, etc). Reducing the PCI-E mode to version 1 in the BIOS settings partially solved the issue by stabilizing the boot process. Even then, however, the system had occasional performance degradation and/or random lockups. I was using a Radeon RX460 graphics card, but I found an MSI forum thread confirming a similar issue with different graphics cards (including AMD and Nvidia models).
The first issue is hard to investigate because of its inconsistent behaviour; however, in my experience the system usually worked better with two RAM modules instead of four. Shuffling RAM modules between slots occasionally solved the issue, but only temporarily.

The second problem was easier to identify, since it started soon after I replaced the older Radeon 2400PRO based graphics card with a more modern Radeon RX460 one. The Windows boot process locked up immediately on the first reboot after the AMD drivers were installed. The Linux boot process froze as well, on loading the display server (X.Org). In both cases, the only way to continue was to reset the system. By default, the BIOS was selecting PCI-E v2 for the graphics card. Reducing PCI-E to version 1 helped to load the desktop environment in Linux, but system performance was unstable, up to complete system lockups. Windows kept crashing regardless of the PCI-E mode set in the BIOS. Explicitly setting PCI-E v3.0 in the BIOS rendered the system unbootable until the CMOS was cleared. It seems v3 was either not supported by the CPU at all, or it didn't work in combination with my motherboard.

X79-based systems have been considered legacy for a long time, and it is not very likely that someone will decide to build one these days, especially with a quite rare engineering sample CPU. In that case though, my recommendation is to avoid the QAFF S-Spec CPU and choose one of the many other options supported by the motherboard.