LBC Manual
RTFM
Before You Begin
- Make sure you've read what this is all about.
- There are some requirements for your computer system:
  - At the moment, you need to run a Linux OS.
  - The initial download volume is about 200MB.
  - The collider needs about 550MB of disk space.
  - The collider needs at least 770MB of free RAM.
  - For best CPU performance you need recent, fast CPU cores with AVX2 capability and a high clock rate (a quick check for AVX2 is shown below). You can get a GPU client which boosts performance even more. See the Generator Speed section below for a comparison of what performance to expect.
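If you are not sure whether your CPU has AVX2, a quick check on Linux (this just reads the standard flags list in /proc/cpuinfo, nothing LBC-specific) is:
grep -m1 -o avx2 /proc/cpuinfo || echo "no AVX2 support"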
Download
Go to the download section. If you are already running Linux, grab the LBC script (a mere 65kB) and continue with the On Linux section. If you are running Windows, you will need the LBC Appliance (around 1GB) - see the next section.
Installation
LBC Appliance
There is a ready-made LBC Appliance in case you do not want to install LBC on your system directly. This appliance is a VMware image with a pre-installed LBC on a fairly new 64bit Arch Linux.
Using the free VMware Player, you can run the LBC appliance on both Linux and Windows hosts.
While the appliance offers better encapsulation than a native Linux installation, you have to download about 1GB of data in 2-3 packages. On the other hand, it already comes with a fairly new .blf file and all necessary programs (gcc, xdelta3) preinstalled. The performance penalty of running in a VM is negligible (~3%).
Here's how:
- Make sure your computer is 64bit and supports virtualization (and that virtualization is enabled in the BIOS)
- Download and install the VMware Player (~75MB)
- Download the LBC Appliance (~1800MB)
- (optional) The archive is packed with 7-Zip. Download a .7z unpacker if you don't already have one.
- Unpack archlinux.7z; it will need around 2GB of disk space (see the example command after this list).
- Start VMware Player, choose the Arch Linux image, press "Play".
- The Arch Linux base image is from http://www.osboxes.org/arch-linux/, so Linux should boot and at the login prompt you enter:
  username: osboxes
  password: osboxes.org
- On the shell, do:
cd collider; ./LBC -x
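If you unpack the archive on the command line and the 7-Zip/p7zip "7z" tool is installed, a typical invocation (assuming the archive name from above) is:
7z x archlinux.7z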
On Linux
- Make sure the following packages are installed (a sample install command for Debian/Ubuntu systems follows this list):
  - perl (5.14 or newer) - probably preinstalled
  - bzip2 - most probably preinstalled
  - xdelta3 - look for a package named "xdelta3"
  - libgmp-dev(el) - the GNU Multiple Precision Arithmetic Library including header files (therefore the -dev or -devel)
  - libssl-dev(el) - the OpenSSL library including header files (therefore the -dev or -devel)
  - A sane compilation toolchain: gcc, make
- You want to perform the installation as root or with "sudo". After install, "chown" all files to the user you want LBC to run as. Sissy.
- Download LBC into some suitable directory and change into that directory, e.g. via wget https://lbc.cryptoguru.org/static/client/LBC
- Continue in section All OS.
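On a Debian- or Ubuntu-based system, the packages above can typically be installed in one go (package names may differ on other distributions, e.g. gmp-devel and openssl-devel on RPM-based systems):
sudo apt-get update
sudo apt-get install perl bzip2 xdelta3 libgmp-dev libssl-dev gcc make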
All OS
- In the LBC directory, start the client via
./LBC -h
- After a fair share of incomprehensible messages and ASCII vomiting, the help should appear. If it doesn't, simply enter ./LBC -h again.
- If the help appears, it also shows the Id of the client on your machine.
- The final step is to perform a system check of LBC:
./LBC -x
This will check for updates and install them. It will also benchmark the generator and check the connection to the pool server (a combined example of these steps follows below). Warning! Do not expect any support if you omit this step.
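Putting the All OS steps together, a typical first run looks roughly like this (the chmod is only needed if the download did not preserve the executable bit; exact output will vary):
chmod +x LBC   # make the downloaded script executable, if necessary
./LBC -h       # show the help and the client Id
./LBC -x       # self-test: fetch updates, benchmark the generator, check the pool connection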
Operation
System Usage
LBC is a CPU intensive application. If you tell it to use 4 cores, it will use these cores 100% non-stop. If you tell it to use 128 cores, it will also hog these 100%. On the other hand, while using these 128 cores, it will not grab more than 2.5GB of memory, it will have virtually no disk IO and it will cause very little network traffic (a couple of bytes every couple of minutes).
This allows you to operate LBC on machines that have e.g. high IO load but spare CPU cycles, without performance impact. Say your CPU has 4 cores and hyperthreading enabled (= 8 logical cores in total). If you let it run with the -c 4 argument, it will use only 4 cores, leaving the hyperthreaded cores for the system or your interactive work.
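For example, to start the client on exactly those 4 physical cores (using only the -c option described here; all other settings remain at their defaults):
./LBC -c 4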
When you observe the running processes by issuing e.g. a "top" command, you will see something similar to this:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18146 root 20 0 515m 514m 512m R 105 0.4 0:26.50 ./gen-hrdcore-avx2-linux64 -I 0000000000000000000000000000000000000000000000000000ff4d29800001 -c 2933192
18148 root 20 0 515m 514m 512m R 105 0.4 0:26.50 ./gen-hrdcore-avx2-linux64 -I 0000000000000000000000000000000000000000000000000000ff4d2a800001 -c 8283468
18135 root 20 0 515m 514m 512m R 99 0.4 0:28.51 ./gen-hrdcore-avx2-linux64 -I 0000000000000000000000000000000000000000000000000000ff4d25800001 -c 4385968
18139 root 20 0 515m 514m 512m R 99 0.4 0:27.51 ./gen-hrdcore-avx2-linux64 -I 0000000000000000000000000000000000000000000000000000ff4d26800001 -c 9736244
18140 root 20 0 515m 514m 512m R 99 0.4 0:27.51 ./gen-hrdcore-avx2-linux64 -I 0000000000000000000000000000000000000000000000000000ff4d27800001 -c 3659580
...
These are basically the generators as started by the LBC client (if you did e.g. -c 4, you should see 4 of these processes) and each generator uses pretty much 100% CPU - which means one core. The -I parameter is the key offset where the generator starts to generate and check its block of 2^24 keys. The -c parameter you see there is the challenge the LBC client gave the generator. (Note: this has nothing to do with the -c parameter you started the LBC client with - the number of CPUs to use.)
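A quick way to list just the generator processes and their parameters (standard Linux tools, nothing LBC-specific) is:
top -b -n 1 | grep gen-hrdcore
# or, avoiding a match on the grep process itself:
ps -ef | grep '[g]en-hrdcore'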
Security
LBC operation is secure and poses no threat to your computer system. Security is maintained where it matters: the server infrastructure is controlled down to the hardware level, no Bitcoin infrastructure is required on the client, and clients and server monitor and validate each other. LBC has been used by more than 260 users (and counting) with no security incident since its inception. See the thread @ bitcointalk for current news.
LBC server infrastructure is under tight control. No vague cloud services, no managed servers. The systems are fully and solely under the authors' control. They are kept up-to-date and are monitored for even remotely relevant CVEs.
All programs and data have checksums in place to prevent code or data tampering. The clients and the server perform mutual checks at protocol level to make sure the other party is legit.
Any non-standard behavior is met with rigorous cancellation of communication (by both the LBC server and the LBC client), and misbehaving clients (code tampering, excessive false positives, no PoW, excessive promises of work left undelivered) go fairly quickly to a blacklist. Also, the LBC client validates the key generators tightly with a challenge-response protocol to defy PoW cheating.
The programs themselves do not require any critical Bitcoin infrastructure on the machine. You do not need any blockchain data, or any wallet, on an LBC client machine. If security is paramount, you are encouraged to run LBC in a virtual machine or container to provide more encapsulation. There is some performance loss, but with a good VM configuration this can be kept at a minimum.
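As a rough illustration of the container route (the base image, mount point and package selection here are assumptions, not an official recipe), you could run the client in a throw-away Debian container like this:
# start a disposable container with the current LBC directory mounted at /collider
docker run -it --rm -v "$PWD:/collider" -w /collider debian:stable bash
# inside the container: install the packages from the "On Linux" section, then run the self-test
apt-get update && apt-get install -y perl bzip2 xdelta3 libgmp-dev libssl-dev gcc make wget
./LBC -x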
The LBC client is deparsed Perl source. While somewhat scattered, you can ultimately look at it in your text editor. It also checks its own source code to prevent code tampering. The LBC server also randomly performs a challenge-response protocol with the LBC client and will deny communication with a tampered client. You can try it: even if you add a single whitespace somewhere, the server will consider the client tampered and block communication. Warning! It is safe to do this once or twice, but unless you want to end up on the pool blacklist, revert your change.
Generator Speed
The following are various benchmark results for given CPUs and generators. This should give you a good comparison to evaluate whether your system works at a reasonable speed. The keys/s number is a rough estimation of performance per core (*). Some of these may be outdated, so your speed should always be at least as high.
CPU | Generator | keys/s |
---|---|---|
Intel Xeon E3-1505Mv5 @ 2.8GHz + 20% of Nvidia Quadro M2000M | kardashev-skylake | ~5 700 000 |
Intel Xeon E3-1505Mv5 @ 2.8GHz | gen-hrdcore-skylake-linux64 | ~877 000 |
Intel i5-3570 @ 3.4GHz | kardashev-sandybridge | ~880 000 |
Intel i5-2500K @ 3.3GHz | gen-hrdcore-sse42-linux64 | ~598 000 |
Intel Xeon E5-2620v3 @ 2.4GHz | kardashev-haswell | ~590 000 |
Intel i7-4510U @ 3.1GHz | gen-hrdcore-avx2-linux64 | ~535 000 |
Intel Xeon X5672 @ 3.2GHz | kardashev-westmere | ~636 000 |
Intel i5-4460 @ 3.2GHz | gen-hrdcore-avx2-linux64 | ~520 000 |
Intel i5-4300U @ 2.9GHz | gen-hrdcore-avx2-linux64 | ~470 000 |
Intel i5-4690 @ 3.5GHz | gen-hrdcore-sse42-linux64 | ~442 000 |
Intel E5-2630Lv3 @ 1.8GHz | kardashev-haswell | ~450 000 |
Intel Xeon L5630 @ 2.13GHz | gen-hrdcore-sse42-linux64 | ~415 000 |
AMD Athlon64 X2 4400+ @ 2.2GHz | kardashev-generic | ~375 000 |
(*) Rough estimation means that your numbers using several cores may be above or below that number. Lower, if the CPU is otherwise busy and probably also clocks down due to thermal constraints; higher, because over a longer run the startup cost - present in the 1st benchmark run - is amortized. I.e. the Skylake numbers would suggest over 3.5Mkeys/s for 4 physical cores. The real yield is about 2.8Mkeys/s when all 4 physical cores are used, as they clock at max. 3.2GHz or even 2.8GHz (contrary to 3.7GHz when one core is used).
As for hyper-threading (HT), a.k.a. the logical cores of the CPU: these give you only a marginal performance gain. Normally, HT is used to distribute load more efficiently on the CPU, but the LBC generators are pretty optimized already, so they use the physical cores to a greater extent than regular software. It is therefore advisable to use only the physical cores for the LBC generator (see below for how to determine their number).
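To find out how many physical cores (as opposed to logical/HT cores) your machine has, lscpu from util-linux is usually available:
lscpu | grep -E '^(Socket|Core|Thread)'
# physical cores = "Core(s) per socket" x "Socket(s)"; pass that number to -c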
GPU acceleration usually means your numbers per core will be about 7 times higher than with CPU-only key generation. This factor may also be higher or lower depending on other system constraints.
System Speed
A list of total speeds for complete configurations:
System/Machine | Config | keys/s |
---|---|---|
AWS p2.8xlarge | 32 vCores Xeon v4, 8x K80 GPUs (50% each) | ~80-88M |
Lenovo P50 | 4 Cores E3-1505v5 + 85% Nvidia Quadro M2000M | ~22.6M |
AWS m4.16xlarge | 64 vCores Xeon v4 | ~23M |
AWS p2.xlarge | 4 vCores Xeon v4, 1x K80 GPUs | ~11M |