Compare commits

..

5 Commits

Author     SHA1        Message  Date
Jay D Dee  f3333b0070  v3.16.2  2021-04-08 18:09:31 -04:00
Jay D Dee  902ec046dd  v3.16.1  2021-03-24 18:24:20 -04:00
Jay D Dee  d0b4941321  v3.16.0  2021-03-19 15:45:32 -04:00
Jay D Dee  40089428c5  v3.15.7  2021-03-08 22:44:44 -05:00
Jay D Dee  dc6b007a18  v3.15.6  2021-02-12 15:16:53 -05:00
46 changed files with 3186 additions and 2794 deletions


@@ -1,5 +1,9 @@
Instructions for compiling cpuminer-opt for Windows.
These instructions may be out of date. Please consult the wiki for
the latest:
https://github.com/JayDDee/cpuminer-opt/wiki/Compiling-from-source
Windows compilation using Visual Studio is not supported. Mingw64 is
used on a Linux system (bare metal or virtual machine) to cross-compile
@@ -24,79 +28,76 @@ Refer to Linux compile instructions and install required packages.
Additionally, install mingw-w64.
sudo apt-get install mingw-w64
sudo apt-get install mingw-w64 libz-mingw-w64-dev
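You can confirm the cross-compiler is installed and on the PATH by querying its version; the version reported will vary by distribution:
$ x86_64-w64-mingw32-gcc --version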
2. Create a local library directory for packages to be compiled in the next
step. Suggested location is $HOME/usr/lib/
$ mkdir $HOME/usr/lib
3. Download and build other packages for mingw that don't have a mingw64
version available in the repositories.
Download the following source code packages from their respective and
respected download locations, copy them to ~/usr/lib/ and uncompress them.
openssl
curl
gmp
openssl: https://github.com/openssl/openssl/releases
In most cases the latest vesrion is ok but it's safest to download
the same major and minor version as included in your distribution.
curl: https://github.com/curl/curl/releases
Run the following commands or follow the supplied instructions.
Do not run "make install" unless you are using ~/usr/lib, which isn't
recommended.
gmp: https://gmplib.org/download/gmp/
Some instructions insist on running "make check". If make check fails
it may still work, YMMV.
In most cases the latest version is ok but it's safest to download the same major and minor version as included in your distribution. The following uses versions from Ubuntu 20.04. Change version numbers as required.
You can speed up "make" by using all CPU cores available with "-j n" where
n is the number of CPU threads you want to use.
Run the following commands or follow the supplied instructions. Do not run "make install" unless you are using /usr/lib, which isn't recommended.
Some instructions insist on running "make check". If make check fails it may still work, YMMV.
You can speed up "make" by using all CPU cores available with "-j n" where n is the number of CPU threads you want to use.
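For example, a machine with 8 CPU threads could use:
$ make -j 8
or, to use all available threads without counting them manually:
$ make -j $(nproc)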
openssl:
./Configure mingw64 shared --cross-compile-prefix=x86_64-w64-mingw32
make
$ ./Configure mingw64 shared --cross-compile-prefix=x86_64-w64-mingw32-
$ make
Make may fail with an ld error; just ensure libcrypto-1_1-x64.dll is created.
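A quick sanity check before moving on, run from the openssl build directory:
$ ls -l libcrypto-1_1-x64.dll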
curl:
./configure --with-winssl --with-winidn --host=x86_64-w64-mingw32
make
$ ./configure --with-winssl --with-winidn --host=x86_64-w64-mingw32
$ make
gmp:
./configure --host=x86_64-w64-mingw32
make
$ ./configure --host=x86_64-w64-mingw32
$ make
4. Tweak the environment.
This step is required everytime you login or the commands can be added to
.bashrc.
This step is required every time you log in, or the commands can be added to .bashrc.
Define some local variables to point to the local library.
export LOCAL_LIB="$HOME/usr/lib"
$ export LOCAL_LIB="$HOME/usr/lib"
export LDFLAGS="-L$LOCAL_LIB/curl/lib/.libs -L$LOCAL_LIB/gmp/.libs -L$LOCAL_LIB/openssl"
$ export LDFLAGS="-L$LOCAL_LIB/curl/lib/.libs -L$LOCAL_LIB/gmp/.libs -L$LOCAL_LIB/openssl"
export CONFIGURE_ARGS="--with-curl=$LOCAL_LIB/curl --with-crypto=$LOCAL_LIB/openssl --host=x86_64-w64-mingw32"
$ export CONFIGURE_ARGS="--with-curl=$LOCAL_LIB/curl --with-crypto=$LOCAL_LIB/openssl --host=x86_64-w64-mingw32"
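To confirm the variables are set in the current shell:
$ echo "$LOCAL_LIB" "$CONFIGURE_ARGS"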
Create a release directory and copy some dll files previously built.
This can be done outside of cpuminer-opt and only needs to be done once.
If the release directory is in cpuminer-opt directory it needs to be
recreated every a source package is decompressed.
Adjust for gcc version:
mkdir release
cp /usr/x86_64-w64-mingw32/lib/zlib1.dll release/
cp /usr/x86_64-w64-mingw32/lib/libwinpthread-1.dll release/
cp /usr/lib/gcc/x86_64-w64-mingw32/7.3-win32/libstdc++-6.dll release/
cp /usr/lib/gcc/x86_64-w64-mingw32/7.3-win32/libgcc_s_seh-1.dll release/
cp $LOCAL_LIB/openssl/libcrypto-1_1-x64.dll release/
cp $LOCAL_LIB/curl/lib/.libs/libcurl-4.dll release/
$ export GCC_MINGW_LIB="/usr/lib/gcc/x86_64-w64-mingw32/9.3-win32"
Create a release directory and copy some dll files previously built. This can be done outside of cpuminer-opt and only needs to be done once. If the release directory is in cpuminer-opt directory it needs to be recreated every time a source package is decompressed.
$ mkdir release
$ cp /usr/x86_64-w64-mingw32/lib/zlib1.dll release/
$ cp /usr/x86_64-w64-mingw32/lib/libwinpthread-1.dll release/
$ cp $GCC_MINGW_LIB/libstdc++-6.dll release/
$ cp $GCC_MINGW_LIB/libgcc_s_seh-1.dll release/
$ cp $LOCAL_LIB/openssl/libcrypto-1_1-x64.dll release/
$ cp $LOCAL_LIB/curl/lib/.libs/libcurl-4.dll release/
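After these copies the release directory should contain six dll files:
$ ls release/
libcrypto-1_1-x64.dll libcurl-4.dll libgcc_s_seh-1.dll libstdc++-6.dll libwinpthread-1.dll zlib1.dll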
The following steps need to be done every time a new source package is
opened.
@@ -110,13 +111,73 @@ https://github.com/JayDDee/cpuminer-opt/releases
Decompress and change to the cpuminer-opt directory.
6. Prepare to compile
6. Compile
Create a link to the locally compiled version of gmp.h
ln -s $LOCAL_LIB/gmp-version/gmp.h ./gmp.h
$ ln -s $LOCAL_LIB/gmp-version/gmp.h ./gmp.h
$ ./autogen.sh
Configure the compiler for the CPU architecture of the host machine:
CFLAGS="-O3 -march=native -Wall" ./configure $CONFIGURE_ARGS
or cross compile for a specific CPU architecture:
CFLAGS="-O3 -march=znver1 -Wall" ./configure $CONFIGURE_ARGS
This will compile for AMD Ryzen.
You can compile more generically for a set of specific CPU features if you know what features you want:
CFLAGS="-O3 -maes -msse4.2 -Wall" ./configure $CONFIGURE_ARGS
This will compile for an older CPU that does not have AVX.
You can find several examples in README.txt.
If you have a CPU with more than 64 threads and Windows 7 or higher you can enable the CPU Groups feature by adding the following to CFLAGS:
"-D_WIN32_WINNT=0x0601"
Once you have run configure successfully, run the compiler with n CPU threads:
$ make -j n
Copy cpuminer.exe to the release directory, compress the release directory, copy it to a Windows system, and run cpuminer.exe from the command line.
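A minimal sketch of those packaging steps, assuming the zip utility is installed:
$ cp cpuminer.exe release/
$ zip -r cpuminer-opt-win64.zip release/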
Run cpuminer
In a command window, change directories to the unzipped release folder. To get a list of all options:
cpuminer.exe --help
Command options are specific to where you mine. Refer to the pool's instructions on how to set them.
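For example, a hypothetical stratum connection; the pool URL, wallet address, and password below are placeholders, substitute the values your pool specifies:
cpuminer.exe -a verthash -o stratum+tcp://pool.example.com:3300 -u YourWalletAddress -p x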
Create a link to the locally compiled version of gmp.h
$ ln -s $LOCAL_LIB/gmp-version/gmp.h ./gmp.h
Edit configure.ac to fix the libpthread package name.


@@ -129,7 +129,7 @@ cpuminer_SOURCES = \
algo/lyra2/allium.c \
algo/lyra2/phi2-4way.c \
algo/lyra2/phi2.c \
algo//m7m/m7m.c \
algo/m7m/m7m.c \
algo/m7m/magimath.cpp \
algo/nist5/nist5-gate.c \
algo/nist5/nist5-4way.c \
@@ -192,6 +192,11 @@ cpuminer_SOURCES = \
algo/sm3/sm3-hash-4way.c \
algo/swifftx/swifftx.c \
algo/tiger/sph_tiger.c \
algo/verthash/verthash-gate.c \
algo/verthash/Verthash.c \
algo/verthash/fopen_utf8.c \
algo/verthash/tiny_sha3/sha3.c \
algo/verthash/tiny_sha3/sha3-4way.c \
algo/whirlpool/sph_whirlpool.c \
algo/whirlpool/whirlpool-hash-4way.c \
algo/whirlpool/whirlpool-gate.c \


@@ -89,7 +89,7 @@ Supported Algorithms
lyra2h Hppcoin
lyra2re lyra2
lyra2rev2 lyra2v2
lyra2rev3 lyrav2v3, Vertcoin
lyra2rev3 lyrav2v3
lyra2z
lyra2z330 Lyra2 330 rows, Zoin (ZOI)
m7m Magi (XMG)
@@ -122,6 +122,7 @@ Supported Algorithms
tribus Denarius (DNR)
vanilla blake256r8vnl (VCash)
veltor (VLT)
verthash Vertcoin
whirlpool
whirlpoolx
x11 Dash
@@ -134,7 +135,7 @@ Supported Algorithms
x14 X14
x15 X15
x16r
x16rv2 Ravencoin (RVN)
x16rv2
x16rt Gincoin (GIN)
x16rt-veil Veil (VEIL)
x16s Pigeoncoin (PGN)


@@ -59,7 +59,7 @@ Notes about included DLL files:
Downloading DLL files from alternative sources presents an inherent
security risk if their source is unknown. All DLL files included have
been copied from the Ubuntu-20.04 instalation or compiled by me from
been copied from the Ubuntu-20.04 installation or compiled by me from
source code obtained from the author's official repository. The exact
procedure is documented in the build instructions for Windows:
https://github.com/JayDDee/cpuminer-opt/wiki/Compiling-from-source


@@ -65,6 +65,49 @@ If not what makes it happen or not happen?
Change Log
----------
v3.16.2
Verthash: midstate prehash optimization for all architectures.
Verthash: AVX2 optimization.
GBT: added support for Bech32 addresses, untested.
Linux: added CPU frequency to benchmark log.
Fixed integer overflow in time calculations.
v3.16.1
New options for verthash:
--data-file to specify the name, and optionally the path, of the verthash
data file, default is "verthash.dat" in the current directory.
--verify to perform the data file integrity check at startup, default is
not to verify data file integrity.
Support for creation of default verthash data file if:
1) --data-file option is not used,
2) no default data file is found in the current directory, and,
3) --verify option is used.
More detailed logs related to verthash data file.
Small verthash performance improvement.
Fixed detection of corrupt stats caused by networking issues.
v3.16.0
Added verthash algo.
v3.15.7
Added accepted/stale/rejected percentage to summary log report.
Added warning if share counters mismatch which could corrupt stats.
Linux: CPU temperature reporting is more responsive to rising temperature.
A few AVX2 & AVX512 tweaks.
Removed some dead code and other cleanup.
v3.15.6
Implement keccak pre-hash optimization for x16* algos.
Move conditional mining test to before get_new_work in miner thread.
Add test for share reject reason when solo mining.
Add support for floating point, as well as integer, "networkhasps" in
RPC getmininginfo method.
v3.15.5
Fix stratum jobs lost if 2 jobs received in less than one second.


@@ -15,8 +15,6 @@
#include <stdbool.h>
#include <memory.h>
#include <unistd.h>
#include <openssl/sha.h>
//#include "miner.h"
#include "algo-gate-api.h"
// Define null and standard functions.
@@ -279,9 +277,11 @@ void init_algo_gate( algo_gate_t* gate )
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wimplicit-function-declaration"
// called by each thread that uses the gate
// Called once by main
bool register_algo_gate( int algo, algo_gate_t *gate )
{
bool rc = false;
if ( NULL == gate )
{
applog(LOG_ERR,"FAIL: algo_gate registration failed, NULL gate\n");
@@ -290,108 +290,108 @@ bool register_algo_gate( int algo, algo_gate_t *gate )
init_algo_gate( gate );
switch (algo)
switch ( algo )
{
case ALGO_ALLIUM: register_allium_algo ( gate ); break;
case ALGO_ANIME: register_anime_algo ( gate ); break;
case ALGO_ARGON2: register_argon2_algo ( gate ); break;
case ALGO_ARGON2D250: register_argon2d_crds_algo ( gate ); break;
case ALGO_ARGON2D500: register_argon2d_dyn_algo ( gate ); break;
case ALGO_ARGON2D4096: register_argon2d4096_algo ( gate ); break;
case ALGO_AXIOM: register_axiom_algo ( gate ); break;
case ALGO_BLAKE: register_blake_algo ( gate ); break;
case ALGO_BLAKE2B: register_blake2b_algo ( gate ); break;
case ALGO_BLAKE2S: register_blake2s_algo ( gate ); break;
case ALGO_BLAKECOIN: register_blakecoin_algo ( gate ); break;
case ALGO_BMW512: register_bmw512_algo ( gate ); break;
case ALGO_C11: register_c11_algo ( gate ); break;
case ALGO_DECRED: register_decred_algo ( gate ); break;
case ALGO_DEEP: register_deep_algo ( gate ); break;
case ALGO_DMD_GR: register_dmd_gr_algo ( gate ); break;
case ALGO_GROESTL: register_groestl_algo ( gate ); break;
case ALGO_HEX: register_hex_algo ( gate ); break;
case ALGO_HMQ1725: register_hmq1725_algo ( gate ); break;
case ALGO_HODL: register_hodl_algo ( gate ); break;
case ALGO_JHA: register_jha_algo ( gate ); break;
case ALGO_KECCAK: register_keccak_algo ( gate ); break;
case ALGO_KECCAKC: register_keccakc_algo ( gate ); break;
case ALGO_LBRY: register_lbry_algo ( gate ); break;
case ALGO_LYRA2H: register_lyra2h_algo ( gate ); break;
case ALGO_LYRA2RE: register_lyra2re_algo ( gate ); break;
case ALGO_LYRA2REV2: register_lyra2rev2_algo ( gate ); break;
case ALGO_LYRA2REV3: register_lyra2rev3_algo ( gate ); break;
case ALGO_LYRA2Z: register_lyra2z_algo ( gate ); break;
case ALGO_LYRA2Z330: register_lyra2z330_algo ( gate ); break;
case ALGO_M7M: register_m7m_algo ( gate ); break;
case ALGO_MINOTAUR: register_minotaur_algo ( gate ); break;
case ALGO_MYR_GR: register_myriad_algo ( gate ); break;
case ALGO_NEOSCRYPT: register_neoscrypt_algo ( gate ); break;
case ALGO_NIST5: register_nist5_algo ( gate ); break;
case ALGO_PENTABLAKE: register_pentablake_algo ( gate ); break;
case ALGO_PHI1612: register_phi1612_algo ( gate ); break;
case ALGO_PHI2: register_phi2_algo ( gate ); break;
case ALGO_POLYTIMOS: register_polytimos_algo ( gate ); break;
case ALGO_POWER2B: register_power2b_algo ( gate ); break;
case ALGO_QUARK: register_quark_algo ( gate ); break;
case ALGO_QUBIT: register_qubit_algo ( gate ); break;
case ALGO_SCRYPT: register_scrypt_algo ( gate ); break;
case ALGO_SHA256D: register_sha256d_algo ( gate ); break;
case ALGO_SHA256Q: register_sha256q_algo ( gate ); break;
case ALGO_SHA256T: register_sha256t_algo ( gate ); break;
case ALGO_SHA3D: register_sha3d_algo ( gate ); break;
case ALGO_SHAVITE3: register_shavite_algo ( gate ); break;
case ALGO_SKEIN: register_skein_algo ( gate ); break;
case ALGO_SKEIN2: register_skein2_algo ( gate ); break;
case ALGO_SKUNK: register_skunk_algo ( gate ); break;
case ALGO_SONOA: register_sonoa_algo ( gate ); break;
case ALGO_TIMETRAVEL: register_timetravel_algo ( gate ); break;
case ALGO_TIMETRAVEL10: register_timetravel10_algo ( gate ); break;
case ALGO_TRIBUS: register_tribus_algo ( gate ); break;
case ALGO_VANILLA: register_vanilla_algo ( gate ); break;
case ALGO_VELTOR: register_veltor_algo ( gate ); break;
case ALGO_WHIRLPOOL: register_whirlpool_algo ( gate ); break;
case ALGO_WHIRLPOOLX: register_whirlpoolx_algo ( gate ); break;
case ALGO_X11: register_x11_algo ( gate ); break;
case ALGO_X11EVO: register_x11evo_algo ( gate ); break;
case ALGO_X11GOST: register_x11gost_algo ( gate ); break;
case ALGO_X12: register_x12_algo ( gate ); break;
case ALGO_X13: register_x13_algo ( gate ); break;
case ALGO_X13BCD: register_x13bcd_algo ( gate ); break;
case ALGO_X13SM3: register_x13sm3_algo ( gate ); break;
case ALGO_X14: register_x14_algo ( gate ); break;
case ALGO_X15: register_x15_algo ( gate ); break;
case ALGO_X16R: register_x16r_algo ( gate ); break;
case ALGO_X16RV2: register_x16rv2_algo ( gate ); break;
case ALGO_X16RT: register_x16rt_algo ( gate ); break;
case ALGO_X16RT_VEIL: register_x16rt_veil_algo ( gate ); break;
case ALGO_X16S: register_x16s_algo ( gate ); break;
case ALGO_X17: register_x17_algo ( gate ); break;
case ALGO_X21S: register_x21s_algo ( gate ); break;
case ALGO_X22I: register_x22i_algo ( gate ); break;
case ALGO_X25X: register_x25x_algo ( gate ); break;
case ALGO_XEVAN: register_xevan_algo ( gate ); break;
case ALGO_YESCRYPT: register_yescrypt_05_algo ( gate ); break;
case ALGO_ALLIUM: rc = register_allium_algo ( gate ); break;
case ALGO_ANIME: rc = register_anime_algo ( gate ); break;
case ALGO_ARGON2: rc = register_argon2_algo ( gate ); break;
case ALGO_ARGON2D250: rc = register_argon2d_crds_algo ( gate ); break;
case ALGO_ARGON2D500: rc = register_argon2d_dyn_algo ( gate ); break;
case ALGO_ARGON2D4096: rc = register_argon2d4096_algo ( gate ); break;
case ALGO_AXIOM: rc = register_axiom_algo ( gate ); break;
case ALGO_BLAKE: rc = register_blake_algo ( gate ); break;
case ALGO_BLAKE2B: rc = register_blake2b_algo ( gate ); break;
case ALGO_BLAKE2S: rc = register_blake2s_algo ( gate ); break;
case ALGO_BLAKECOIN: rc = register_blakecoin_algo ( gate ); break;
case ALGO_BMW512: rc = register_bmw512_algo ( gate ); break;
case ALGO_C11: rc = register_c11_algo ( gate ); break;
case ALGO_DECRED: rc = register_decred_algo ( gate ); break;
case ALGO_DEEP: rc = register_deep_algo ( gate ); break;
case ALGO_DMD_GR: rc = register_dmd_gr_algo ( gate ); break;
case ALGO_GROESTL: rc = register_groestl_algo ( gate ); break;
case ALGO_HEX: rc = register_hex_algo ( gate ); break;
case ALGO_HMQ1725: rc = register_hmq1725_algo ( gate ); break;
case ALGO_HODL: rc = register_hodl_algo ( gate ); break;
case ALGO_JHA: rc = register_jha_algo ( gate ); break;
case ALGO_KECCAK: rc = register_keccak_algo ( gate ); break;
case ALGO_KECCAKC: rc = register_keccakc_algo ( gate ); break;
case ALGO_LBRY: rc = register_lbry_algo ( gate ); break;
case ALGO_LYRA2H: rc = register_lyra2h_algo ( gate ); break;
case ALGO_LYRA2RE: rc = register_lyra2re_algo ( gate ); break;
case ALGO_LYRA2REV2: rc = register_lyra2rev2_algo ( gate ); break;
case ALGO_LYRA2REV3: rc = register_lyra2rev3_algo ( gate ); break;
case ALGO_LYRA2Z: rc = register_lyra2z_algo ( gate ); break;
case ALGO_LYRA2Z330: rc = register_lyra2z330_algo ( gate ); break;
case ALGO_M7M: rc = register_m7m_algo ( gate ); break;
case ALGO_MINOTAUR: rc = register_minotaur_algo ( gate ); break;
case ALGO_MYR_GR: rc = register_myriad_algo ( gate ); break;
case ALGO_NEOSCRYPT: rc = register_neoscrypt_algo ( gate ); break;
case ALGO_NIST5: rc = register_nist5_algo ( gate ); break;
case ALGO_PENTABLAKE: rc = register_pentablake_algo ( gate ); break;
case ALGO_PHI1612: rc = register_phi1612_algo ( gate ); break;
case ALGO_PHI2: rc = register_phi2_algo ( gate ); break;
case ALGO_POLYTIMOS: rc = register_polytimos_algo ( gate ); break;
case ALGO_POWER2B: rc = register_power2b_algo ( gate ); break;
case ALGO_QUARK: rc = register_quark_algo ( gate ); break;
case ALGO_QUBIT: rc = register_qubit_algo ( gate ); break;
case ALGO_SCRYPT: rc = register_scrypt_algo ( gate ); break;
case ALGO_SHA256D: rc = register_sha256d_algo ( gate ); break;
case ALGO_SHA256Q: rc = register_sha256q_algo ( gate ); break;
case ALGO_SHA256T: rc = register_sha256t_algo ( gate ); break;
case ALGO_SHA3D: rc = register_sha3d_algo ( gate ); break;
case ALGO_SHAVITE3: rc = register_shavite_algo ( gate ); break;
case ALGO_SKEIN: rc = register_skein_algo ( gate ); break;
case ALGO_SKEIN2: rc = register_skein2_algo ( gate ); break;
case ALGO_SKUNK: rc = register_skunk_algo ( gate ); break;
case ALGO_SONOA: rc = register_sonoa_algo ( gate ); break;
case ALGO_TIMETRAVEL: rc = register_timetravel_algo ( gate ); break;
case ALGO_TIMETRAVEL10: rc = register_timetravel10_algo ( gate ); break;
case ALGO_TRIBUS: rc = register_tribus_algo ( gate ); break;
case ALGO_VANILLA: rc = register_vanilla_algo ( gate ); break;
case ALGO_VELTOR: rc = register_veltor_algo ( gate ); break;
case ALGO_VERTHASH: rc = register_verthash_algo ( gate ); break;
case ALGO_WHIRLPOOL: rc = register_whirlpool_algo ( gate ); break;
case ALGO_WHIRLPOOLX: rc = register_whirlpoolx_algo ( gate ); break;
case ALGO_X11: rc = register_x11_algo ( gate ); break;
case ALGO_X11EVO: rc = register_x11evo_algo ( gate ); break;
case ALGO_X11GOST: rc = register_x11gost_algo ( gate ); break;
case ALGO_X12: rc = register_x12_algo ( gate ); break;
case ALGO_X13: rc = register_x13_algo ( gate ); break;
case ALGO_X13BCD: rc = register_x13bcd_algo ( gate ); break;
case ALGO_X13SM3: rc = register_x13sm3_algo ( gate ); break;
case ALGO_X14: rc = register_x14_algo ( gate ); break;
case ALGO_X15: rc = register_x15_algo ( gate ); break;
case ALGO_X16R: rc = register_x16r_algo ( gate ); break;
case ALGO_X16RV2: rc = register_x16rv2_algo ( gate ); break;
case ALGO_X16RT: rc = register_x16rt_algo ( gate ); break;
case ALGO_X16RT_VEIL: rc = register_x16rt_veil_algo ( gate ); break;
case ALGO_X16S: rc = register_x16s_algo ( gate ); break;
case ALGO_X17: rc = register_x17_algo ( gate ); break;
case ALGO_X21S: rc = register_x21s_algo ( gate ); break;
case ALGO_X22I: rc = register_x22i_algo ( gate ); break;
case ALGO_X25X: rc = register_x25x_algo ( gate ); break;
case ALGO_XEVAN: rc = register_xevan_algo ( gate ); break;
case ALGO_YESCRYPT: rc = register_yescrypt_05_algo ( gate ); break;
// case ALGO_YESCRYPT: register_yescrypt_algo ( gate ); break;
case ALGO_YESCRYPTR8: register_yescryptr8_05_algo ( gate ); break;
case ALGO_YESCRYPTR8: rc = register_yescryptr8_05_algo ( gate ); break;
// case ALGO_YESCRYPTR8: register_yescryptr8_algo ( gate ); break;
case ALGO_YESCRYPTR8G: register_yescryptr8g_algo ( gate ); break;
case ALGO_YESCRYPTR16: register_yescryptr16_05_algo( gate ); break;
case ALGO_YESCRYPTR8G: rc = register_yescryptr8g_algo ( gate ); break;
case ALGO_YESCRYPTR16: rc = register_yescryptr16_05_algo( gate ); break;
// case ALGO_YESCRYPTR16: register_yescryptr16_algo ( gate ); break;
case ALGO_YESCRYPTR32: register_yescryptr32_05_algo( gate ); break;
case ALGO_YESCRYPTR32: rc = register_yescryptr32_05_algo( gate ); break;
// case ALGO_YESCRYPTR32: register_yescryptr32_algo ( gate ); break;
case ALGO_YESPOWER: register_yespower_algo ( gate ); break;
case ALGO_YESPOWERR16: register_yespowerr16_algo ( gate ); break;
case ALGO_YESPOWER_B2B: register_yespower_b2b_algo ( gate ); break;
case ALGO_ZR5: register_zr5_algo ( gate ); break;
case ALGO_YESPOWER: rc = register_yespower_algo ( gate ); break;
case ALGO_YESPOWERR16: rc = register_yespowerr16_algo ( gate ); break;
case ALGO_YESPOWER_B2B: rc = register_yespower_b2b_algo ( gate ); break;
case ALGO_ZR5: rc = register_zr5_algo ( gate ); break;
default:
applog(LOG_ERR,"FAIL: algo_gate registration failed, unknown algo %s.\n", algo_names[opt_algo] );
applog(LOG_ERR,"BUG: unregistered algorithm %s.\n", algo_names[opt_algo] );
return false;
} // switch
// ensure required functions were defined.
if ( gate->scanhash == (void*)&null_scanhash )
if ( !rc )
{
applog(LOG_ERR, "FAIL: Required algo_gate functions undefined\n");
applog(LOG_ERR, "FAIL: %s algorithm failed to initialize\n", algo_names[opt_algo] );
return false;
}
return true;
@@ -419,7 +419,6 @@ void exec_hash_function( int algo, void *output, const void *pdata )
const char* const algo_alias_map[][2] =
{
// alias proper
{ "argon2d-crds", "argon2d250" },
{ "argon2d-dyn", "argon2d500" },
{ "argon2d-uis", "argon2d4096" },
{ "bcd", "x13bcd" },
@@ -434,7 +433,6 @@ const char* const algo_alias_map[][2] =
{ "flax", "c11" },
{ "hsr", "x13sm3" },
{ "jackpot", "jha" },
{ "jane", "scryptjane" },
{ "lyra2", "lyra2re" },
{ "lyra2v2", "lyra2rev2" },
{ "lyra2v3", "lyra2rev3" },


@@ -114,15 +114,15 @@ typedef struct
// Mandatory functions, one of these is mandatory. If a generic scanhash
// is used a custom target hash function must be registered, with a custom
// scanhash the target hash function can be called directly and doesn't need
// to be registered in the gate.
// to be registered with the gate.
int ( *scanhash ) ( struct work*, uint32_t, uint64_t*, struct thr_info* );
int ( *hash ) ( void*, const void*, int );
//optional, safe to use default in most cases
// Allocate thread local buffers and other initialization specific to miner
// threads.
// Called once by each miner thread to allocate thread local buffers and
// other initialization specific to miner threads.
bool ( *miner_thread_init ) ( int );
// Get thread local copy of blockheader with unique nonce.
@@ -150,7 +150,7 @@ void ( *build_stratum_request ) ( char*, struct work*, struct stratum_ctx* );
char* ( *malloc_txs_request ) ( struct work* );
// Big or little
// Big endian or little endian
void ( *set_work_data_endian ) ( struct work* );
double ( *calc_network_diff ) ( struct work* );
@@ -260,7 +260,7 @@ int scanhash_8way_64in_32out( struct work *work, uint32_t max_nonce,
#endif
// displays warning
int null_hash ();
int null_hash();
// optional safe targets, default listed first unless noted.
@@ -281,7 +281,7 @@ void std_be_build_stratum_request( char *req, struct work *work );
char* std_malloc_txs_request( struct work *work );
// Default is do_nothing (assumed LE)
// Default is do_nothing, little endian is assumed
void set_work_data_big_endian( struct work *work );
double std_calc_network_diff( struct work *work );


@@ -55,8 +55,8 @@ MYALIGN const unsigned int mul2ipt[] = {0x728efc00, 0x6894e61a, 0x3fc3b14d, 0x2
#define ECHO_SUBBYTES(state, i, j) \
state[i][j] = _mm_aesenc_si128(state[i][j], k1);\
state[i][j] = _mm_aesenc_si128(state[i][j], M128(zero));\
k1 = _mm_add_epi32(k1, M128(const1))
k1 = _mm_add_epi32(k1, M128(const1));\
state[i][j] = _mm_aesenc_si128(state[i][j], M128(zero))
#define ECHO_MIXBYTES(state1, state2, j, t1, t2, s2) \
s2 = _mm_add_epi8(state1[0][j], state1[0][j]);\


@@ -10,22 +10,20 @@ static const unsigned int mul2ipt[] __attribute__ ((aligned (64))) =
0xfd5ba600, 0x2a8c71d7, 0x1eb845e3, 0xc96f9234
};
*/
// do these need to be reversed?
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
#define mul2mask \
m512_const2_64( 0, 0x00001b00 )
//#define mul2mask m512_const2_64( 0, 0x00001b00 )
//_mm512_set4_epi32( 0, 0, 0, 0x00001b00 )
// _mm512_set4_epi32( 0x00001b00, 0, 0, 0 )
//_mm512_set4_epi32( 0x00001b00, 0, 0, 0 )
#define lsbmask m512_const1_32( 0x01010101 )
//#define lsbmask m512_const1_32( 0x01010101 )
#define ECHO_SUBBYTES( state, i, j ) \
state[i][j] = _mm512_aesenc_epi128( state[i][j], k1 ); \
state[i][j] = _mm512_aesenc_epi128( state[i][j], m512_zero ); \
k1 = _mm512_add_epi32( k1, m512_one_128 );
k1 = _mm512_add_epi32( k1, one ); \
state[i][j] = _mm512_aesenc_epi128( state[i][j], m512_zero );
#define ECHO_MIXBYTES( state1, state2, j, t1, t2, s2 ) do \
{ \
@@ -140,6 +138,9 @@ void echo_4way_compress( echo_4way_context *ctx, const __m512i *pmsg,
unsigned int r, b, i, j;
__m512i t1, t2, s2, k1;
__m512i _state[4][4], _state2[4][4], _statebackup[4][4];
__m512i one = m512_one_128;
__m512i mul2mask = m512_const2_64( 0, 0x00001b00 );
__m512i lsbmask = m512_const1_32( 0x01010101 );
_state[ 0 ][ 0 ] = ctx->state[ 0 ][ 0 ];
_state[ 0 ][ 1 ] = ctx->state[ 0 ][ 1 ];
@@ -406,8 +407,8 @@ int echo_4way_full( echo_4way_context *ctx, void *hashval, int nHashSize,
#define ECHO_SUBBYTES_2WAY( state, i, j ) \
state[i][j] = _mm256_aesenc_epi128( state[i][j], k1 ); \
k1 = _mm256_add_epi32( k1, m256_one_128 ); \
state[i][j] = _mm256_aesenc_epi128( state[i][j], m256_zero ); \
k1 = _mm256_add_epi32( k1, m256_one_128 );
#define ECHO_MIXBYTES_2WAY( state1, state2, j, t1, t2, s2 ) do \
{ \


@@ -14,7 +14,11 @@
#ifndef FUGUE_HASH_API_H
#define FUGUE_HASH_API_H
#if defined(__AES__)
#if defined(__AES__)
#if !defined(__SSE4_1__)
#error "Unsupported configuration, AES needs SSE4.1. Compile without AES."
#endif
#include "algo/sha/sha3_common.h"
#include "simd-utils.h"


@@ -51,7 +51,7 @@ int groestl256_4way_full( groestl256_4way_context* ctx, void* output,
const int hashlen_m128i = 32 >> 4; // bytes to __m128i
const int hash_offset = SIZE256 - hashlen_m128i;
int rem = ctx->rem_ptr;
int blocks = len / SIZE256;
uint64_t blocks = len / SIZE256;
__m512i* in = (__m512i*)input;
int i;
@@ -89,21 +89,21 @@ int groestl256_4way_full( groestl256_4way_context* ctx, void* output,
if ( i == SIZE256 - 1 )
{
// only 1 vector left in buffer, all padding at once
ctx->buffer[i] = m512_const2_64( (uint64_t)blocks << 56, 0x80 );
ctx->buffer[i] = m512_const2_64( blocks << 56, 0x80 );
}
else
{
// add first padding
ctx->buffer[i] = m512_const4_64( 0, 0x80, 0, 0x80 );
ctx->buffer[i] = m512_const2_64( 0, 0x80 );
// add zero padding
for ( i += 1; i < SIZE256 - 1; i++ )
ctx->buffer[i] = m512_zero;
// add length padding, second last byte is zero unless blocks > 255
ctx->buffer[i] = m512_const2_64( (uint64_t)blocks << 56, 0 );
ctx->buffer[i] = m512_const2_64( blocks << 56, 0 );
}
// digest final padding block and do output transform
// digest final padding block and do output transform
TF512_4way( ctx->chaining, ctx->buffer );
OF512_4way( ctx->chaining );
@@ -122,7 +122,7 @@ int groestl256_4way_update_close( groestl256_4way_context* ctx, void* output,
const int hashlen_m128i = ctx->hashlen / 16; // bytes to __m128i
const int hash_offset = SIZE256 - hashlen_m128i;
int rem = ctx->rem_ptr;
int blocks = len / SIZE256;
uint64_t blocks = len / SIZE256;
__m512i* in = (__m512i*)input;
int i;
@@ -146,20 +146,18 @@ int groestl256_4way_update_close( groestl256_4way_context* ctx, void* output,
if ( i == SIZE256 - 1 )
{
// only 1 vector left in buffer, all padding at once
ctx->buffer[i] = m512_const1_128( _mm_set_epi8(
blocks, blocks>>8,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0x80 ) );
ctx->buffer[i] = m512_const2_64( blocks << 56, 0x80 );
}
else
{
// add first padding
ctx->buffer[i] = m512_const4_64( 0, 0x80, 0, 0x80 );
ctx->buffer[i] = m512_const2_64( 0, 0x80 );
// add zero padding
for ( i += 1; i < SIZE256 - 1; i++ )
ctx->buffer[i] = m512_zero;
// add length padding, second last byte is zero unless blocks > 255
ctx->buffer[i] = m512_const1_128( _mm_set_epi8(
blocks, blocks>>8, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0 ) );
ctx->buffer[i] = m512_const2_64( blocks << 56, 0 );
}
// digest final padding block and do output transform
@@ -209,23 +207,23 @@ int groestl256_2way_full( groestl256_2way_context* ctx, void* output,
const int hashlen_m128i = 32 >> 4; // bytes to __m128i
const int hash_offset = SIZE256 - hashlen_m128i;
int rem = ctx->rem_ptr;
int blocks = len / SIZE256;
uint64_t blocks = len / SIZE256;
__m256i* in = (__m256i*)input;
int i;
if (ctx->chaining == NULL || ctx->buffer == NULL)
return 1;
if (ctx->chaining == NULL || ctx->buffer == NULL)
return 1;
for ( i = 0; i < SIZE256; i++ )
{
for ( i = 0; i < SIZE256; i++ )
{
ctx->chaining[i] = m256_zero;
ctx->buffer[i] = m256_zero;
}
}
// The only non-zero in the IV is len. It can be hard coded.
ctx->chaining[ 3 ] = m256_const2_64( 0, 0x0100000000000000 );
ctx->buf_ptr = 0;
ctx->rem_ptr = 0;
// The only non-zero in the IV is len. It can be hard coded.
ctx->chaining[ 3 ] = m256_const2_64( 0, 0x0100000000000000 );
ctx->buf_ptr = 0;
ctx->rem_ptr = 0;
// --- update ---
@@ -247,7 +245,7 @@ int groestl256_2way_full( groestl256_2way_context* ctx, void* output,
if ( i == SIZE256 - 1 )
{
// only 1 vector left in buffer, all padding at once
ctx->buffer[i] = m256_const2_64( (uint64_t)blocks << 56, 0x80 );
ctx->buffer[i] = m256_const2_64( blocks << 56, 0x80 );
}
else
{
@@ -258,10 +256,10 @@ int groestl256_2way_full( groestl256_2way_context* ctx, void* output,
ctx->buffer[i] = m256_zero;
// add length padding, second last byte is zero unless blocks > 255
ctx->buffer[i] = m256_const2_64( (uint64_t)blocks << 56, 0 );
ctx->buffer[i] = m256_const2_64( blocks << 56, 0 );
}
// digest final padding block and do output transform
// digest final padding block and do output transform
TF512_2way( ctx->chaining, ctx->buffer );
OF512_2way( ctx->chaining );
@@ -279,7 +277,7 @@ int groestl256_2way_update_close( groestl256_2way_context* ctx, void* output,
const int hashlen_m128i = ctx->hashlen / 16; // bytes to __m128i
const int hash_offset = SIZE256 - hashlen_m128i;
int rem = ctx->rem_ptr;
int blocks = len / SIZE256;
uint64_t blocks = len / SIZE256;
__m256i* in = (__m256i*)input;
int i;
@@ -303,8 +301,7 @@ int groestl256_2way_update_close( groestl256_2way_context* ctx, void* output,
if ( i == SIZE256 - 1 )
{
// only 1 vector left in buffer, all padding at once
ctx->buffer[i] = m256_const1_128( _mm_set_epi8(
blocks, blocks>>8,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0x80 ) );
ctx->buffer[i] = m256_const2_64( blocks << 56, 0x80 );
}
else
{
@@ -315,8 +312,7 @@ int groestl256_2way_update_close( groestl256_2way_context* ctx, void* output,
ctx->buffer[i] = m256_zero;
// add length padding, second last byte is zero unless blocks > 255
ctx->buffer[i] = m256_const1_128( _mm_set_epi8(
blocks, blocks>>8, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0 ) );
ctx->buffer[i] = m256_const2_64( blocks << 56, 0 );
}
// digest final padding block and do output transform


@@ -43,7 +43,7 @@ int groestl512_4way_update_close( groestl512_4way_context* ctx, void* output,
const int hashlen_m128i = 64 / 16; // bytes to __m128i
const int hash_offset = SIZE512 - hashlen_m128i;
int rem = ctx->rem_ptr;
int blocks = len / SIZE512;
uint64_t blocks = len / SIZE512;
__m512i* in = (__m512i*)input;
int i;
@@ -64,16 +64,14 @@ int groestl512_4way_update_close( groestl512_4way_context* ctx, void* output,
if ( i == SIZE512 - 1 )
{
// only 1 vector left in buffer, all padding at once
ctx->buffer[i] = m512_const1_128( _mm_set_epi8(
blocks, blocks>>8,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0x80 ) );
ctx->buffer[i] = m512_const2_64( blocks << 56, 0x80 );
}
else
{
ctx->buffer[i] = m512_const4_64( 0, 0x80, 0, 0x80 );
ctx->buffer[i] = m512_const2_64( 0, 0x80 );
for ( i += 1; i < SIZE512 - 1; i++ )
ctx->buffer[i] = m512_zero;
ctx->buffer[i] = m512_const1_128( _mm_set_epi8(
blocks, blocks>>8, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0 ) );
ctx->buffer[i] = m512_const2_64( blocks << 56, 0 );
}
TF1024_4way( ctx->chaining, ctx->buffer );
@@ -124,7 +122,7 @@ int groestl512_4way_full( groestl512_4way_context* ctx, void* output,
}
else
{
ctx->buffer[i] = m512_const4_64( 0, 0x80, 0, 0x80 );
ctx->buffer[i] = m512_const2_64( 0, 0x80 );
for ( i += 1; i < SIZE512 - 1; i++ )
ctx->buffer[i] = m512_zero;
ctx->buffer[i] = m512_const2_64( blocks << 56, 0 );
@@ -168,7 +166,7 @@ int groestl512_2way_update_close( groestl512_2way_context* ctx, void* output,
const int hashlen_m128i = 64 / 16; // bytes to __m128i
const int hash_offset = SIZE512 - hashlen_m128i;
int rem = ctx->rem_ptr;
int blocks = len / SIZE512;
uint64_t blocks = len / SIZE512;
__m256i* in = (__m256i*)input;
int i;
@@ -189,16 +187,14 @@ int groestl512_2way_update_close( groestl512_2way_context* ctx, void* output,
if ( i == SIZE512 - 1 )
{
// only 1 vector left in buffer, all padding at once
ctx->buffer[i] = m256_const1_128( _mm_set_epi8(
blocks, blocks>>8,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0x80 ) );
ctx->buffer[i] = m256_const2_64( blocks << 56, 0x80 );
}
else
{
ctx->buffer[i] = m256_const2_64( 0, 0x80 );
for ( i += 1; i < SIZE512 - 1; i++ )
ctx->buffer[i] = m256_zero;
ctx->buffer[i] = m256_const1_128( _mm_set_epi8(
blocks, blocks>>8, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0, 0,0 ) );
ctx->buffer[i] = m256_const2_64( blocks << 56, 0 );
}
TF1024_2way( ctx->chaining, ctx->buffer );


@@ -548,7 +548,7 @@ static const sph_u32 T512[64][16] = {
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
// Hamsi 8 way
// Hamsi 8 way AVX512
#define INPUT_BIG8 \
do { \
@@ -849,13 +849,11 @@ void hamsi512_8way_update( hamsi_8way_big_context *sc, const void *data,
void hamsi512_8way_close( hamsi_8way_big_context *sc, void *dst )
{
__m512i pad[1];
int ch, cl;
uint32_t ch, cl;
sph_enc32be( &ch, sc->count_high );
sph_enc32be( &cl, sc->count_low + ( sc->partial_len << 3 ) );
pad[0] = _mm512_set_epi32( cl, ch, cl, ch, cl, ch, cl, ch,
cl, ch, cl, ch, cl, ch, cl, ch );
// pad[0] = m512_const2_32( cl, ch );
pad[0] = _mm512_set1_epi64( ((uint64_t)cl << 32 ) | (uint64_t)ch );
sc->buf[0] = m512_const1_64( 0x80 );
hamsi_8way_big( sc, sc->buf, 1 );
hamsi_8way_big_final( sc, pad );
@@ -863,11 +861,9 @@ void hamsi512_8way_close( hamsi_8way_big_context *sc, void *dst )
mm512_block_bswap_32( (__m512i*)dst, sc->h );
}
#endif // AVX512
// Hamsi 4 way
// Hamsi 4 way AVX2
#define INPUT_BIG \
do { \
@@ -1186,14 +1182,12 @@ void hamsi512_4way_update( hamsi_4way_big_context *sc, const void *data,
void hamsi512_4way_close( hamsi_4way_big_context *sc, void *dst )
{
__m256i pad[1];
int ch, cl;
uint32_t ch, cl;
sph_enc32be( &ch, sc->count_high );
sph_enc32be( &cl, sc->count_low + ( sc->partial_len << 3 ) );
pad[0] = _mm256_set_epi32( cl, ch, cl, ch, cl, ch, cl, ch );
pad[0] = _mm256_set1_epi64x( ((uint64_t)cl << 32 ) | (uint64_t)ch );
sc->buf[0] = m256_const1_64( 0x80 );
// sc->buf[0] = _mm256_set_epi32( 0UL, 0x80UL, 0UL, 0x80UL,
// 0UL, 0x80UL, 0UL, 0x80UL );
hamsi_big( sc, sc->buf, 1 );
hamsi_big_final( sc, pad );


@@ -134,65 +134,47 @@
do { \
DECL64(c0); \
DECL64(c1); \
DECL64(c2); \
DECL64(c3); \
DECL64(c4); \
DECL64(bnn); \
NOT64(bnn, b20); \
KHI_XO(c0, b00, b10, b20); \
KHI_XO(c1, b10, bnn, b30); \
KHI_XA(c2, b20, b30, b40); \
KHI_XO(c3, b30, b40, b00); \
KHI_XA(c4, b40, b00, b10); \
KHI_XA(b20, b20, b30, b40); \
KHI_XO(b30, b30, b40, b00); \
KHI_XA(b40, b40, b00, b10); \
MOV64(b00, c0); \
MOV64(b10, c1); \
MOV64(b20, c2); \
MOV64(b30, c3); \
MOV64(b40, c4); \
NOT64(bnn, b41); \
KHI_XO(c0, b01, b11, b21); \
KHI_XA(c1, b11, b21, b31); \
KHI_XO(c2, b21, b31, bnn); \
KHI_XO(c3, b31, b41, b01); \
KHI_XA(c4, b41, b01, b11); \
KHI_XO(b21, b21, b31, bnn); \
KHI_XO(b31, b31, b41, b01); \
KHI_XA(b41, b41, b01, b11); \
MOV64(b01, c0); \
MOV64(b11, c1); \
MOV64(b21, c2); \
MOV64(b31, c3); \
MOV64(b41, c4); \
NOT64(bnn, b32); \
KHI_XO(c0, b02, b12, b22); \
KHI_XA(c1, b12, b22, b32); \
KHI_XA(c2, b22, bnn, b42); \
KHI_XO(c3, bnn, b42, b02); \
KHI_XA(c4, b42, b02, b12); \
KHI_XA(b22, b22, bnn, b42); \
KHI_XO(b32, bnn, b42, b02); \
KHI_XA(b42, b42, b02, b12); \
MOV64(b02, c0); \
MOV64(b12, c1); \
MOV64(b22, c2); \
MOV64(b32, c3); \
MOV64(b42, c4); \
NOT64(bnn, b33); \
KHI_XA(c0, b03, b13, b23); \
KHI_XO(c1, b13, b23, b33); \
KHI_XO(c2, b23, bnn, b43); \
KHI_XA(c3, bnn, b43, b03); \
KHI_XO(c4, b43, b03, b13); \
KHI_XO(b23, b23, bnn, b43); \
KHI_XA(b33, bnn, b43, b03); \
KHI_XO(b43, b43, b03, b13); \
MOV64(b03, c0); \
MOV64(b13, c1); \
MOV64(b23, c2); \
MOV64(b33, c3); \
MOV64(b43, c4); \
NOT64(bnn, b14); \
KHI_XA(c0, b04, bnn, b24); \
KHI_XO(c1, bnn, b24, b34); \
KHI_XA(c2, b24, b34, b44); \
KHI_XO(c3, b34, b44, b04); \
KHI_XA(c4, b44, b04, b14); \
KHI_XA(b24, b24, b34, b44); \
KHI_XO(b34, b34, b44, b04); \
KHI_XA(b44, b44, b04, b14); \
MOV64(b04, c0); \
MOV64(b14, c1); \
MOV64(b24, c2); \
MOV64(b34, c3); \
MOV64(b44, c4); \
} while (0)
#ifdef IOTA
@@ -201,6 +183,7 @@
#define IOTA(r) XOR64_IOTA(a00, a00, r)
#ifdef P0
#undef P0
#undef P1
#undef P2
#undef P3


@@ -66,6 +66,17 @@ static const uint32 CNS_INIT[128] __attribute((aligned(64))) = {
a = _mm512_xor_si512(a,c0);\
b = _mm512_xor_si512(b,c1);
#define MULT24W( a0, a1 ) \
do { \
__m512i b = _mm512_xor_si512( a0, \
_mm512_maskz_shuffle_epi32( 0xbbbb, a1, 16 ) ); \
a0 = _mm512_or_si512( _mm512_bsrli_epi128( b, 4 ), \
_mm512_bslli_epi128( a1,12 ) ); \
a1 = _mm512_or_si512( _mm512_bsrli_epi128( a1, 4 ), \
_mm512_bslli_epi128( b,12 ) ); \
} while(0)
/*
#define MULT24W( a0, a1, mask ) \
do { \
__m512i b = _mm512_xor_si512( a0, \
@@ -73,6 +84,7 @@ do { \
a0 = _mm512_or_si512( _mm512_bsrli_epi128(b,4), _mm512_bslli_epi128(a1,12) );\
a1 = _mm512_or_si512( _mm512_bsrli_epi128(a1,4), _mm512_bslli_epi128(b,12) );\
} while(0)
*/
// confirm pointer arithmetic
// ok but use array indexes
@@ -235,7 +247,6 @@ void rnd512_4way( luffa_4way_context *state, __m512i *msg )
__m512i msg0, msg1;
__m512i tmp[2];
__m512i x[8];
const __m512i MASK = m512_const2_64( 0, 0x00000000ffffffff );
t0 = chainv[0];
t1 = chainv[1];
@@ -249,7 +260,7 @@ void rnd512_4way( luffa_4way_context *state, __m512i *msg )
t0 = _mm512_xor_si512( t0, chainv[8] );
t1 = _mm512_xor_si512( t1, chainv[9] );
MULT24W( t0, t1, MASK );
MULT24W( t0, t1 );
msg0 = _mm512_shuffle_epi32( msg[0], 27 );
msg1 = _mm512_shuffle_epi32( msg[1], 27 );
@@ -268,68 +279,67 @@ void rnd512_4way( luffa_4way_context *state, __m512i *msg )
t0 = chainv[0];
t1 = chainv[1];
MULT24W( chainv[0], chainv[1], MASK );
MULT24W( chainv[0], chainv[1] );
chainv[0] = _mm512_xor_si512( chainv[0], chainv[2] );
chainv[1] = _mm512_xor_si512( chainv[1], chainv[3] );
MULT24W( chainv[2], chainv[3], MASK );
MULT24W( chainv[2], chainv[3] );
chainv[2] = _mm512_xor_si512(chainv[2], chainv[4]);
chainv[3] = _mm512_xor_si512(chainv[3], chainv[5]);
MULT24W( chainv[4], chainv[5], MASK );
MULT24W( chainv[4], chainv[5] );
chainv[4] = _mm512_xor_si512(chainv[4], chainv[6]);
chainv[5] = _mm512_xor_si512(chainv[5], chainv[7]);
MULT24W( chainv[6], chainv[7], MASK );
MULT24W( chainv[6], chainv[7] );
chainv[6] = _mm512_xor_si512(chainv[6], chainv[8]);
chainv[7] = _mm512_xor_si512(chainv[7], chainv[9]);
MULT24W( chainv[8], chainv[9], MASK );
MULT24W( chainv[8], chainv[9] );
chainv[8] = _mm512_xor_si512( chainv[8], t0 );
chainv[9] = _mm512_xor_si512( chainv[9], t1 );
t0 = chainv[8];
t1 = chainv[9];
MULT24W( chainv[8], chainv[9], MASK );
MULT24W( chainv[8], chainv[9] );
chainv[8] = _mm512_xor_si512( chainv[8], chainv[6] );
chainv[9] = _mm512_xor_si512( chainv[9], chainv[7] );
MULT24W( chainv[6], chainv[7], MASK );
MULT24W( chainv[6], chainv[7] );
chainv[6] = _mm512_xor_si512( chainv[6], chainv[4] );
chainv[7] = _mm512_xor_si512( chainv[7], chainv[5] );
MULT24W( chainv[4], chainv[5], MASK );
MULT24W( chainv[4], chainv[5] );
chainv[4] = _mm512_xor_si512( chainv[4], chainv[2] );
chainv[5] = _mm512_xor_si512( chainv[5], chainv[3] );
MULT24W( chainv[2], chainv[3], MASK );
MULT24W( chainv[2], chainv[3] );
chainv[2] = _mm512_xor_si512( chainv[2], chainv[0] );
chainv[3] = _mm512_xor_si512( chainv[3], chainv[1] );
MULT24W( chainv[0], chainv[1], MASK );
MULT24W( chainv[0], chainv[1] );
chainv[0] = _mm512_xor_si512( _mm512_xor_si512( chainv[0], t0 ), msg0 );
chainv[1] = _mm512_xor_si512( _mm512_xor_si512( chainv[1], t1 ), msg1 );
MULT24W( msg0, msg1, MASK );
MULT24W( msg0, msg1 );
chainv[2] = _mm512_xor_si512( chainv[2], msg0 );
chainv[3] = _mm512_xor_si512( chainv[3], msg1 );
MULT24W( msg0, msg1, MASK );
MULT24W( msg0, msg1 );
chainv[4] = _mm512_xor_si512( chainv[4], msg0 );
chainv[5] = _mm512_xor_si512( chainv[5], msg1 );
MULT24W( msg0, msg1, MASK );
MULT24W( msg0, msg1 );
chainv[6] = _mm512_xor_si512( chainv[6], msg0 );
chainv[7] = _mm512_xor_si512( chainv[7], msg1 );
MULT24W( msg0, msg1, MASK );
MULT24W( msg0, msg1);
chainv[8] = _mm512_xor_si512( chainv[8], msg0 );
chainv[9] = _mm512_xor_si512( chainv[9], msg1 );
MULT24W( msg0, msg1, MASK );
MULT24W( msg0, msg1 );
// replace with ror
chainv[3] = _mm512_rol_epi32( chainv[3], 1 );
chainv[5] = _mm512_rol_epi32( chainv[5], 2 );
chainv[7] = _mm512_rol_epi32( chainv[7], 3 );
@@ -496,7 +506,7 @@ int luffa_4way_update( luffa_4way_context *state, const void *data,
{
// remaining data bytes
buffer[0] = _mm512_shuffle_epi8( vdata[0], shuff_bswap32 );
buffer[1] = m512_const2_64( 0, 0x0000000080000000 );
buffer[1] = m512_const1_i128( 0x0000000080000000 );
}
return 0;
}
@@ -520,7 +530,7 @@ int luffa_4way_close( luffa_4way_context *state, void *hashval )
rnd512_4way( state, buffer );
else
{ // empty pad block, constant data
msg[0] = m512_const2_64( 0, 0x0000000080000000 );
msg[0] = m512_const1_i128( 0x0000000080000000 );
msg[1] = m512_zero;
rnd512_4way( state, msg );
}
@@ -583,13 +593,13 @@ int luffa512_4way_full( luffa_4way_context *state, void *output,
{
// padding of partial block
msg[0] = _mm512_shuffle_epi8( vdata[ 0 ], shuff_bswap32 );
msg[1] = m512_const2_64( 0, 0x0000000080000000 );
msg[1] = m512_const1_i128( 0x0000000080000000 );
rnd512_4way( state, msg );
}
else
{
// empty pad block
msg[0] = m512_const2_64( 0, 0x0000000080000000 );
msg[0] = m512_const1_i128( 0x0000000080000000 );
msg[1] = m512_zero;
rnd512_4way( state, msg );
}
@@ -631,13 +641,13 @@ int luffa_4way_update_close( luffa_4way_context *state,
{
// padding of partial block
msg[0] = _mm512_shuffle_epi8( vdata[ 0 ], shuff_bswap32 );
msg[1] = m512_const2_64( 0, 0x0000000080000000 );
msg[1] = m512_const1_i128( 0x0000000080000000 );
rnd512_4way( state, msg );
}
else
{
// empty pad block
msg[0] = m512_const2_64( 0, 0x0000000080000000 );
msg[0] = m512_const1_i128( 0x0000000080000000 );
msg[1] = m512_zero;
rnd512_4way( state, msg );
}
@@ -832,7 +842,7 @@ void rnd512_2way( luffa_2way_context *state, __m256i *msg )
__m256i msg0, msg1;
__m256i tmp[2];
__m256i x[8];
const __m256i MASK = m256_const2_64( 0, 0x00000000ffffffff );
const __m256i MASK = m256_const1_i128( 0x00000000ffffffff );
t0 = chainv[0];
t1 = chainv[1];
@@ -1088,7 +1098,7 @@ int luffa_2way_update( luffa_2way_context *state, const void *data,
{
// remaining data bytes
buffer[0] = _mm256_shuffle_epi8( vdata[0], shuff_bswap32 );
buffer[1] = m256_const2_64( 0, 0x0000000080000000 );
buffer[1] = m256_const1_i128( 0x0000000080000000 );
}
return 0;
}
@@ -1104,7 +1114,7 @@ int luffa_2way_close( luffa_2way_context *state, void *hashval )
rnd512_2way( state, buffer );
else
{ // empty pad block, constant data
msg[0] = m256_const2_64( 0, 0x0000000080000000 );
msg[0] = m256_const1_i128( 0x0000000080000000 );
msg[1] = m256_zero;
rnd512_2way( state, msg );
}
@@ -1159,13 +1169,13 @@ int luffa512_2way_full( luffa_2way_context *state, void *output,
{
// padding of partial block
msg[0] = _mm256_shuffle_epi8( vdata[ 0 ], shuff_bswap32 );
msg[1] = m256_const2_64( 0, 0x0000000080000000 );
msg[1] = m256_const1_i128( 0x0000000080000000 );
rnd512_2way( state, msg );
}
else
{
// empty pad block
msg[0] = m256_const2_64( 0, 0x0000000080000000 );
msg[0] = m256_const1_i128( 0x0000000080000000 );
msg[1] = m256_zero;
rnd512_2way( state, msg );
}
@@ -1206,13 +1216,13 @@ int luffa_2way_update_close( luffa_2way_context *state,
{
// padding of partial block
msg[0] = _mm256_shuffle_epi8( vdata[ 0 ], shuff_bswap32 );
msg[1] = m256_const2_64( 0, 0x0000000080000000 );
msg[1] = m256_const1_i128( 0x0000000080000000 );
rnd512_2way( state, msg );
}
else
{
// empty pad block
msg[0] = m256_const2_64( 0, 0x0000000080000000 );
msg[0] = m256_const1_i128( 0x0000000080000000 );
msg[1] = m256_zero;
rnd512_2way( state, msg );
}


@@ -23,7 +23,7 @@
#include "simd-utils.h"
#include "luffa_for_sse2.h"
#define MULT2(a0,a1) do \
#define MULT2( a0, a1 ) do \
{ \
__m128i b = _mm_xor_si128( a0, _mm_shuffle_epi32( _mm_and_si128(a1,MASK), 16 ) ); \
a0 = _mm_or_si128( _mm_srli_si128(b,4), _mm_slli_si128(a1,12) ); \
@@ -345,11 +345,11 @@ HashReturn update_and_final_luffa( hashState_luffa *state, BitSequence* output,
// 16 byte partial block exists for 80 byte len
if ( state->rembytes )
// padding of partial block
rnd512( state, m128_const_64( 0, 0x80000000 ),
rnd512( state, m128_const_i128( 0x80000000 ),
mm128_bswap_32( cast_m128i( data ) ) );
else
// empty pad block
rnd512( state, m128_zero, m128_const_64( 0, 0x80000000 ) );
rnd512( state, m128_zero, m128_const_i128( 0x80000000 ) );
finalization512( state, (uint32*) output );
if ( state->hashbitlen > 512 )
@@ -394,11 +394,11 @@ int luffa_full( hashState_luffa *state, BitSequence* output, int hashbitlen,
// 16 byte partial block exists for 80 byte len
if ( state->rembytes )
// padding of partial block
rnd512( state, m128_const_64( 0, 0x80000000 ),
rnd512( state, m128_const_i128( 0x80000000 ),
mm128_bswap_32( cast_m128i( data ) ) );
else
// empty pad block
rnd512( state, m128_zero, m128_const_64( 0, 0x80000000 ) );
rnd512( state, m128_zero, m128_const_i128( 0x80000000 ) );
finalization512( state, (uint32*) output );
if ( state->hashbitlen > 512 )
@@ -606,7 +606,6 @@ static void finalization512( hashState_luffa *state, uint32 *b )
casti_m256i( b, 0 ) = _mm256_shuffle_epi8(
casti_m256i( hash, 0 ), shuff_bswap32 );
// casti_m256i( b, 0 ) = mm256_bswap_32( casti_m256i( hash, 0 ) );
rnd512( state, zero, zero );
@@ -621,7 +620,6 @@ static void finalization512( hashState_luffa *state, uint32 *b )
casti_m256i( b, 1 ) = _mm256_shuffle_epi8(
casti_m256i( hash, 0 ), shuff_bswap32 );
// casti_m256i( b, 1 ) = mm256_bswap_32( casti_m256i( hash, 0 ) );
}
#else


@@ -77,6 +77,7 @@ static const sph_u32 H256[8] = {
#else // no SHA
/*
static const sph_u32 K[64] = {
SPH_C32(0x428A2F98), SPH_C32(0x71374491),
SPH_C32(0xB5C0FBCF), SPH_C32(0xE9B5DBA5),
@@ -111,6 +112,7 @@ static const sph_u32 K[64] = {
SPH_C32(0x90BEFFFA), SPH_C32(0xA4506CEB),
SPH_C32(0xBEF9A3F7), SPH_C32(0xC67178F2)
};
*/
#if SPH_SMALL_FOOTPRINT_SHA2
@@ -689,6 +691,14 @@ sph_sha256_addbits_and_close(void *cc, unsigned ub, unsigned n, void *dst)
// sph_sha256_init(cc);
}
void sph_sha256_full( void *dst, const void *data, size_t len )
{
sph_sha256_context cc;
sph_sha256_init( &cc );
sph_sha256( &cc, data, len );
sph_sha256_close( &cc, dst );
}
/* see sph_sha2.h */
//void
//sph_sha224_comp(const sph_u32 msg[16], sph_u32 val[8])

View File

@@ -205,6 +205,10 @@ void sph_sha256_comp(const sph_u32 msg[16], sph_u32 val[8]);
#define sph_sha256_comp sph_sha224_comp
#endif
void sph_sha256_full( void *dst, const void *data, size_t len );
#if SPH_64
/**


@@ -23,14 +23,23 @@ static const uint32_t IV512[] =
_mm256_blend_epi32( mm256_ror128_32( a ), \
mm256_ror128_32( b ), 0x88 )
#if defined(__VAES__)
#define mm256_aesenc_2x128( x, k ) \
_mm256_aesenc_epi128( x, _mm256_castsi128_si256( k ) )
#else
#define mm256_aesenc_2x128( x, k ) \
mm256_concat_128( _mm_aesenc_si128( mm128_extr_hi128_256( x ), k ), \
_mm_aesenc_si128( mm128_extr_lo128_256( x ), k ) )
#endif
static void
c512_2way( shavite512_2way_context *ctx, const void *msg )
{
#if defined(__VAES__)
const __m256i zero = _mm256_setzero_si256();
#else
const __m128i zero = _mm_setzero_si128();
#endif
__m256i p0, p1, p2, p3, x;
__m256i k00, k01, k02, k03, k10, k11, k12, k13;
__m256i *m = (__m256i*)msg;
@@ -308,7 +317,7 @@ void shavite512_2way_close( shavite512_2way_context *ctx, void *dst )
uint32_t vp = ctx->ptr>>5;
// Terminating byte then zero pad
casti_m256i( buf, vp++ ) = m256_const2_64( 0, 0x0000000000000080 );
casti_m256i( buf, vp++ ) = m256_const1_i128( 0x0000000000000080 );
// Zero pad full vectors up to count
for ( ; vp < 6; vp++ )
@@ -388,13 +397,13 @@ void shavite512_2way_update_close( shavite512_2way_context *ctx, void *dst,
if ( vp == 0 ) // empty buf, xevan.
{
casti_m256i( buf, 0 ) = m256_const2_64( 0, 0x0000000000000080 );
casti_m256i( buf, 0 ) = m256_const1_i128( 0x0000000000000080 );
memset_zero_256( (__m256i*)buf + 1, 5 );
ctx->count0 = ctx->count1 = ctx->count2 = ctx->count3 = 0;
}
else // half full buf, everyone else.
{
casti_m256i( buf, vp++ ) = m256_const2_64( 0, 0x0000000000000080 );
casti_m256i( buf, vp++ ) = m256_const1_i128( 0x0000000000000080 );
memset_zero_256( (__m256i*)buf + vp, 6 - vp );
}
@@ -478,13 +487,13 @@ void shavite512_2way_full( shavite512_2way_context *ctx, void *dst,
if ( vp == 0 ) // empty buf, xevan.
{
casti_m256i( buf, 0 ) = m256_const2_64( 0, 0x0000000000000080 );
casti_m256i( buf, 0 ) = m256_const1_i128( 0x0000000000000080 );
memset_zero_256( (__m256i*)buf + 1, 5 );
ctx->count0 = ctx->count1 = ctx->count2 = ctx->count3 = 0;
}
else // half full buf, everyone else.
{
casti_m256i( buf, vp++ ) = m256_const2_64( 0, 0x0000000000000080 );
casti_m256i( buf, vp++ ) = m256_const1_i128( 0x0000000000000080 );
memset_zero_256( (__m256i*)buf + vp, 6 - vp );
}


@@ -292,7 +292,7 @@ void shavite512_4way_close( shavite512_4way_context *ctx, void *dst )
uint32_t vp = ctx->ptr>>6;
// Terminating byte then zero pad
casti_m512i( buf, vp++ ) = m512_const2_64( 0, 0x0000000000000080 );
casti_m512i( buf, vp++ ) = m512_const1_i128( 0x0000000000000080 );
// Zero pad full vectors up to count
for ( ; vp < 6; vp++ )
@@ -372,13 +372,13 @@ void shavite512_4way_update_close( shavite512_4way_context *ctx, void *dst,
if ( vp == 0 ) // empty buf, xevan.
{
casti_m512i( buf, 0 ) = m512_const2_64( 0, 0x0000000000000080 );
casti_m512i( buf, 0 ) = m512_const1_i128( 0x0000000000000080 );
memset_zero_512( (__m512i*)buf + 1, 5 );
ctx->count0 = ctx->count1 = ctx->count2 = ctx->count3 = 0;
}
else // half full buf, everyone else.
{
casti_m512i( buf, vp++ ) = m512_const2_64( 0, 0x0000000000000080 );
casti_m512i( buf, vp++ ) = m512_const1_i128( 0x0000000000000080 );
memset_zero_512( (__m512i*)buf + vp, 6 - vp );
}
@@ -463,13 +463,13 @@ void shavite512_4way_full( shavite512_4way_context *ctx, void *dst,
if ( vp == 0 ) // empty buf, xevan.
{
casti_m512i( buf, 0 ) = m512_const2_64( 0, 0x0000000000000080 );
casti_m512i( buf, 0 ) = m512_const1_i128( 0x0000000000000080 );
memset_zero_512( (__m512i*)buf + 1, 5 );
ctx->count0 = ctx->count1 = ctx->count2 = ctx->count3 = 0;
}
else // half full buf, everyone else.
{
casti_m512i( buf, vp++ ) = m512_const2_64( 0, 0x0000000000000080 );
casti_m512i( buf, vp++ ) = m512_const1_i128( 0x0000000000000080 );
memset_zero_512( (__m512i*)buf + vp, 6 - vp );
}


@@ -1,47 +0,0 @@
/*
* Copyright (c) 2000 Jeroen Ruigrok van der Werven <asmodai@FreeBSD.org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* $FreeBSD: src/include/stdbool.h,v 1.6 2002/08/16 07:33:14 alfred Exp $
*/
#ifndef _STDBOOL_H_
#define _STDBOOL_H_
#define __bool_true_false_are_defined 1
#ifndef __cplusplus
#define false 0
#define true 1
//#define bool _Bool
//#if __STDC_VERSION__ < 199901L && __GNUC__ < 3
//typedef int _Bool;
//#endif
typedef int bool;
#endif /* !__cplusplus */
#endif /* !_STDBOOL_H_ */

File diff suppressed because it is too large.

algo/verthash/Verthash.c (new file, 725 lines)

@@ -0,0 +1,725 @@
/*
* Copyright 2018-2021 CryptoGraphics
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version. See LICENSE for more details.
*/
#include "algo-gate-api.h"
#include "Verthash.h"
#include "mm_malloc.h"
//-----------------------------------------------------------------------------
// Verthash info management
int verthash_info_init(verthash_info_t* info, const char* file_name)
{
// init fields to 0
info->fileName = NULL;
info->data = NULL;
info->dataSize = 0;
info->bitmask = 0;
size_t fileNameLen;
if ( !file_name || !( fileNameLen = strlen( file_name ) ) )
{
applog( LOG_ERR, "Invalid file specification" );
return -1;
}
info->fileName = (char*)malloc( fileNameLen + 1 );
if ( !info->fileName )
{
applog( LOG_ERR, "Failed to allocate memory for Verthash data" );
return -1;
}
memset( info->fileName, 0, fileNameLen + 1 );
memcpy( info->fileName, file_name, fileNameLen );
FILE *fileMiningData = fopen_utf8( info->fileName, "rb" );
if ( !fileMiningData )
{
if ( opt_data_file || !opt_verify )
{
if ( opt_data_file )
applog( LOG_ERR,
"Verthash data file not found or invalid: %s", info->fileName );
else
{
applog( LOG_ERR,
"No Verthash data file specified and default not found");
applog( LOG_NOTICE,
"Add '--verify' to create default 'verthash.dat'");
}
return -1;
}
else
{
applog( LOG_NOTICE, "Creating default 'verthash.dat' in current directory, this will take several minutes");
if ( verthash_generate_data_file( info->fileName ) )
return -1;
fileMiningData = fopen_utf8( info->fileName, "rb" );
if ( !fileMiningData )
{
applog( LOG_ERR, "File system error opening %s", info->fileName );
return -1;
}
applog( LOG_NOTICE, "Verthash data file created successfully" );
}
}
// Get file size
fseek(fileMiningData, 0, SEEK_END);
int fileSize = ftell(fileMiningData);
fseek(fileMiningData, 0, SEEK_SET);
if ( fileSize < 0 )
{
fclose(fileMiningData);
return 1;
}
// Allocate data
info->data = (uint8_t *)_mm_malloc( fileSize, 64 );
if (!info->data)
{
fclose(fileMiningData);
// Memory allocation fatal error.
return 2;
}
// Load data
if ( !fread( info->data, fileSize, 1, fileMiningData ) )
{
applog( LOG_ERR, "File system error reading %s", info->fileName );
fclose(fileMiningData);
return -1;
}
fclose(fileMiningData);
// Update fields
info->bitmask = ((fileSize - VH_HASH_OUT_SIZE)/VH_BYTE_ALIGNMENT) + 1;
info->dataSize = fileSize;
applog( LOG_NOTICE, "Using Verthash data file '%s'", info->fileName );
return 0;
}
//-----------------------------------------------------------------------------
void verthash_info_free(verthash_info_t* info)
{
free(info->fileName);
free(info->data);
info->dataSize = 0;
info->bitmask = 0;
}
//-----------------------------------------------------------------------------
// Verthash hash
#define VH_P0_SIZE 64
#define VH_N_ITER 8
#define VH_N_SUBSET (VH_P0_SIZE*VH_N_ITER)
#define VH_N_ROT 32
#define VH_N_INDEXES 4096
#define VH_BYTE_ALIGNMENT 16
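// 32-bit FNV-1a step: xor the accumulator with b, then multiply by the FNV
// prime 0x01000193.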
static inline uint32_t fnv1a(const uint32_t a, const uint32_t b)
{
return (a ^ b) * 0x1000193;
}
void verthash_hash( const unsigned char* blob_bytes,
const size_t blob_size,
const unsigned char(*input)[VH_HEADER_SIZE],
unsigned char(*output)[VH_HASH_OUT_SIZE] )
{
unsigned char p1[ VH_HASH_OUT_SIZE ] __attribute__ ((aligned (64)));
unsigned char p0[ VH_N_SUBSET ] __attribute__ ((aligned (64)));
uint32_t seek_indexes[VH_N_INDEXES] __attribute__ ((aligned (64)));
uint32_t* p0_index = (uint32_t*)p0;
verthash_sha3_512_final_8( p0, ( (uint64_t*)input )[ 9 ] );
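// Build the seek index table: VH_N_ROT copies of the 512 byte SHA3 subset,
// each copy rotated left by one more bit than the previous one.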
for ( size_t x = 0; x < VH_N_ROT; ++x )
{
memcpy( seek_indexes + x * (VH_N_SUBSET / sizeof(uint32_t)),
p0, VH_N_SUBSET);
#if defined(__AVX2__)
for ( size_t y = 0; y < VH_N_SUBSET / sizeof(__m256i); y += 8)
{
casti_m256i( p0_index, y ) = mm256_rol_32(
casti_m256i( p0_index, y ), 1 );
casti_m256i( p0_index, y+1 ) = mm256_rol_32(
casti_m256i( p0_index, y+1 ), 1 );
casti_m256i( p0_index, y+2 ) = mm256_rol_32(
casti_m256i( p0_index, y+2 ), 1 );
casti_m256i( p0_index, y+3 ) = mm256_rol_32(
casti_m256i( p0_index, y+3 ), 1 );
casti_m256i( p0_index, y+4 ) = mm256_rol_32(
casti_m256i( p0_index, y+4 ), 1 );
casti_m256i( p0_index, y+5 ) = mm256_rol_32(
casti_m256i( p0_index, y+5 ), 1 );
casti_m256i( p0_index, y+6 ) = mm256_rol_32(
casti_m256i( p0_index, y+6 ), 1 );
casti_m256i( p0_index, y+7 ) = mm256_rol_32(
casti_m256i( p0_index, y+7 ), 1 );
}
#else
for ( size_t y = 0; y < VH_N_SUBSET / sizeof(__m128i); y += 8)
{
casti_m128i( p0_index, y ) = mm128_rol_32(
casti_m128i( p0_index, y ), 1 );
casti_m128i( p0_index, y+1 ) = mm128_rol_32(
casti_m128i( p0_index, y+1 ), 1 );
casti_m128i( p0_index, y+2 ) = mm128_rol_32(
casti_m128i( p0_index, y+2 ), 1 );
casti_m128i( p0_index, y+3 ) = mm128_rol_32(
casti_m128i( p0_index, y+3 ), 1 );
casti_m128i( p0_index, y+4 ) = mm128_rol_32(
casti_m128i( p0_index, y+4 ), 1 );
casti_m128i( p0_index, y+5 ) = mm128_rol_32(
casti_m128i( p0_index, y+5 ), 1 );
casti_m128i( p0_index, y+6 ) = mm128_rol_32(
casti_m128i( p0_index, y+6 ), 1 );
casti_m128i( p0_index, y+7 ) = mm128_rol_32(
casti_m128i( p0_index, y+7 ), 1 );
}
#endif
}
sha3( &input[0], VH_HEADER_SIZE, &p1[0], VH_HASH_OUT_SIZE );
uint32_t* p1_32 = (uint32_t*)p1;
uint32_t* blob_bytes_32 = (uint32_t*)blob_bytes;
uint32_t value_accumulator = 0x811c9dc5;
const uint32_t mdiv = ( ( blob_size - VH_HASH_OUT_SIZE )
/ VH_BYTE_ALIGNMENT ) + 1;
#if defined (__AVX2__)
const __m256i k = _mm256_set1_epi32( 0x1000193 );
#elif defined(__SSE4_1__)
const __m128i k = _mm_set1_epi32( 0x1000193 );
#endif
for ( size_t i = 0; i < VH_N_INDEXES; i++ )
{
const uint32_t offset =
( fnv1a( seek_indexes[i], value_accumulator) % mdiv )
* ( VH_BYTE_ALIGNMENT / sizeof(uint32_t) );
const uint32_t *blob_off = blob_bytes_32 + offset;
// update value accumulator for next seek index
value_accumulator = fnv1a( value_accumulator, blob_off[0] );
value_accumulator = fnv1a( value_accumulator, blob_off[1] );
value_accumulator = fnv1a( value_accumulator, blob_off[2] );
value_accumulator = fnv1a( value_accumulator, blob_off[3] );
value_accumulator = fnv1a( value_accumulator, blob_off[4] );
value_accumulator = fnv1a( value_accumulator, blob_off[5] );
value_accumulator = fnv1a( value_accumulator, blob_off[6] );
value_accumulator = fnv1a( value_accumulator, blob_off[7] );
#if defined (__AVX2__)
*(__m256i*)p1_32 = _mm256_mullo_epi32( _mm256_xor_si256(
*(__m256i*)p1_32, *(__m256i*)blob_off ), k );
#elif defined(__SSE4_1__)
casti_m128i( p1_32, 0 ) = _mm_mullo_epi32( _mm_xor_si128(
casti_m128i( p1_32, 0 ), casti_m128i( blob_off, 0 ) ), k );
casti_m128i( p1_32, 1 ) = _mm_mullo_epi32( _mm_xor_si128(
casti_m128i( p1_32, 1 ), casti_m128i( blob_off, 1 ) ), k );
#else
for ( size_t i2 = 0; i2 < VH_HASH_OUT_SIZE / sizeof(uint32_t); i2++ )
p1_32[i2] = fnv1a( p1_32[i2], blob_off[i2] );
#endif
}
memcpy( output, p1, VH_HASH_OUT_SIZE );
}
//-----------------------------------------------------------------------------
// Verthash data file generator
#define NODE_SIZE 32
struct Graph
{
FILE *db;
int64_t log2;
int64_t pow2;
uint8_t *pk;
int64_t index;
};
int64_t Log2(int64_t x)
{
int64_t r = 0;
for (; x > 1; x >>= 1)
{
r++;
}
return r;
}
int64_t bfsToPost(struct Graph *g, const int64_t node)
{
return node & ~g->pow2;
}
int64_t numXi(int64_t index)
{
return (1 << ((uint64_t)index)) * (index + 1) * index;
}
void WriteId(struct Graph *g, uint8_t *Node, const int64_t id)
{
fseek(g->db, id * NODE_SIZE, SEEK_SET);
fwrite(Node, 1, NODE_SIZE, g->db);
}
void WriteNode(struct Graph *g, uint8_t *Node, const int64_t id)
{
const int64_t idx = bfsToPost(g, id);
WriteId(g, Node, idx);
}
void NewNode(struct Graph *g, const int64_t id, uint8_t *hash)
{
WriteNode(g, hash, id);
}
uint8_t *GetId(struct Graph *g, const int64_t id)
{
fseek(g->db, id * NODE_SIZE, SEEK_SET);
uint8_t *node = (uint8_t *)malloc(NODE_SIZE);
if (node == NULL)
return NULL;
const size_t bytes_read = fread(node, 1, NODE_SIZE, g->db);
if(bytes_read != NODE_SIZE) {
free(node);   // don't leak the buffer on a short read
return NULL;
}
return node;
}
uint8_t *GetNode(struct Graph *g, const int64_t id)
{
const int64_t idx = bfsToPost(g, id);
return GetId(g, idx);
}
uint32_t WriteVarInt(uint8_t *buffer, int64_t val)
{
memset(buffer, 0, NODE_SIZE);
uint64_t uval = ((uint64_t)(val)) << 1;
if (val < 0)
{
uval = ~uval;
}
uint32_t i = 0;
while (uval >= 0x80)
{
buffer[i] = (uint8_t)uval | 0x80;
uval >>= 7;
i++;
}
buffer[i] = (uint8_t)uval;
return i;
}
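/* For reference (not part of this file): a sketch of the inverse of
 * WriteVarInt, undoing the 7-bit groups and then the zigzag transform:
 *
 *   int64_t ReadVarInt(const uint8_t *buffer)
 *   {
 *      uint64_t uval = 0;
 *      int shift = 0;
 *      uint32_t i = 0;
 *      while (buffer[i] & 0x80)
 *      {
 *         uval |= ((uint64_t)(buffer[i] & 0x7F)) << shift;
 *         shift += 7;
 *         i++;
 *      }
 *      uval |= ((uint64_t)buffer[i]) << shift;
 *      // the low bit carries the sign: odd means the value was negative
 *      return (uval & 1) ? (int64_t)~(uval >> 1) : (int64_t)(uval >> 1);
 *   }
 */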
void ButterflyGraph(struct Graph *g, int64_t index, int64_t *count)
{
if (index == 0)
{
index = 1;
}
int64_t numLevel = 2 * index;
int64_t perLevel = (int64_t)(1 << (uint64_t)index);
int64_t begin = *count - perLevel;
int64_t level, i;
for (level = 1; level < numLevel; level++)
{
for (i = 0; i < perLevel; i++)
{
int64_t prev;
int64_t shift = index - level;
if (level > numLevel / 2)
{
shift = level - numLevel / 2;
}
if (((i >> (uint64_t)shift) & 1) == 0)
{
prev = i + (1 << (uint64_t)shift);
}
else
{
prev = i - (1 << (uint64_t)shift);
}
uint8_t *parent0 = GetNode(g, begin + (level - 1) * perLevel + prev);
uint8_t *parent1 = GetNode(g, *count - perLevel);
uint8_t *buf = (uint8_t *)malloc(NODE_SIZE);
WriteVarInt(buf, *count);
uint8_t *hashInput = (uint8_t *)malloc(NODE_SIZE * 4);
memcpy(hashInput, g->pk, NODE_SIZE);
memcpy(hashInput + NODE_SIZE, buf, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 2), parent0, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 3), parent1, NODE_SIZE);
uint8_t *hashOutput = (uint8_t *)malloc(NODE_SIZE);
sha3(hashInput, NODE_SIZE * 4, hashOutput, NODE_SIZE);
NewNode(g, *count, hashOutput);
(*count)++;
free(hashOutput);
free(hashInput);
free(parent0);
free(parent1);
free(buf);
}
}
}
void XiGraphIter(struct Graph *g, int64_t index)
{
int64_t count = g->pow2;
int8_t stackSize = 5;
int64_t *stack = (int64_t *)malloc(sizeof(int64_t) * stackSize);
for (int i = 0; i < 5; i++)
stack[i] = index;
int8_t graphStackSize = 5;
int32_t *graphStack = (int32_t *)malloc(sizeof(int32_t) * graphStackSize);
for (int i = 0; i < 5; i++)
graphStack[i] = graphStackSize - i - 1;
int64_t i = 0;
int64_t graph = 0;
int64_t pow2index = 1 << ((uint64_t)index);
for (i = 0; i < pow2index; i++)
{
uint8_t *buf = (uint8_t *)malloc(NODE_SIZE);
WriteVarInt(buf, count);
uint8_t *hashInput = (uint8_t *)malloc(NODE_SIZE * 2);
memcpy(hashInput, g->pk, NODE_SIZE);
memcpy(hashInput + NODE_SIZE, buf, NODE_SIZE);
uint8_t *hashOutput = (uint8_t *)malloc(NODE_SIZE);
sha3(hashInput, NODE_SIZE * 2, hashOutput, NODE_SIZE);
NewNode(g, count, hashOutput);
count++;
free(hashOutput);
free(hashInput);
free(buf);
}
if (index == 1)
{
ButterflyGraph(g, index, &count);
return;
}
while (stackSize != 0 && graphStackSize != 0)
{
index = stack[stackSize - 1];
graph = graphStack[graphStackSize - 1];
stackSize--;
if (stackSize > 0)
{
int64_t *tempStack = (int64_t *)malloc(sizeof(int64_t) * (stackSize));
memcpy(tempStack, stack, sizeof(int64_t) * (stackSize));
free(stack);
stack = tempStack;
}
graphStackSize--;
if (graphStackSize > 0)
{
int32_t *tempGraphStack = (int32_t *)malloc(sizeof(int32_t) * (graphStackSize));
memcpy(tempGraphStack, graphStack, sizeof(int32_t) * (graphStackSize));
free(graphStack);
graphStack = tempGraphStack;
}
int8_t indicesSize = 5;
int64_t *indices = (int64_t *)malloc(sizeof(int64_t) * indicesSize);
for (int i = 0; i < indicesSize; i++)
indices[i] = index - 1;
int8_t graphsSize = 5;
int32_t *graphs = (int32_t *)malloc(sizeof(int32_t) * graphsSize);
for (int i = 0; i < graphsSize; i++)
graphs[i] = graphsSize - i - 1;
int64_t pow2indexInner = 1 << ((uint64_t)index);
int64_t pow2indexInner_1 = 1 << ((uint64_t)index - 1);
if (graph == 0)
{
uint64_t sources = count - pow2indexInner;
for (i = 0; i < pow2indexInner_1; i++)
{
uint8_t *parent0 = GetNode(g, sources + i);
uint8_t *parent1 = GetNode(g, sources + i + pow2indexInner_1);
uint8_t *buf = (uint8_t *)malloc(NODE_SIZE);
WriteVarInt(buf, count);
uint8_t *hashInput = (uint8_t *)malloc(NODE_SIZE * 4);
memcpy(hashInput, g->pk, NODE_SIZE);
memcpy(hashInput + NODE_SIZE, buf, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 2), parent0, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 3), parent1, NODE_SIZE);
uint8_t *hashOutput = (uint8_t *)malloc(NODE_SIZE);
sha3(hashInput, NODE_SIZE * 4, hashOutput, NODE_SIZE);
NewNode(g, count, hashOutput);
count++;
free(hashOutput);
free(hashInput);
free(parent0);
free(parent1);
free(buf);
}
}
else if (graph == 1)
{
uint64_t firstXi = count;
for (i = 0; i < pow2indexInner_1; i++)
{
uint64_t nodeId = firstXi + i;
uint8_t *parent = GetNode(g, firstXi - pow2indexInner_1 + i);
uint8_t *buf = (uint8_t *)malloc(NODE_SIZE);
WriteVarInt(buf, nodeId);
uint8_t *hashInput = (uint8_t *)malloc(NODE_SIZE * 3);
memcpy(hashInput, g->pk, NODE_SIZE);
memcpy(hashInput + NODE_SIZE, buf, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 2), parent, NODE_SIZE);
uint8_t *hashOutput = (uint8_t *)malloc(NODE_SIZE);
sha3(hashInput, NODE_SIZE * 3, hashOutput, NODE_SIZE);
NewNode(g, count, hashOutput);
count++;
free(hashOutput);
free(hashInput);
free(parent);
free(buf);
}
}
else if (graph == 2)
{
uint64_t secondXi = count;
for (i = 0; i < pow2indexInner_1; i++)
{
uint64_t nodeId = secondXi + i;
uint8_t *parent = GetNode(g, secondXi - pow2indexInner_1 + i);
uint8_t *buf = (uint8_t *)malloc(NODE_SIZE);
WriteVarInt(buf, nodeId);
uint8_t *hashInput = (uint8_t *)malloc(NODE_SIZE * 3);
memcpy(hashInput, g->pk, NODE_SIZE);
memcpy(hashInput + NODE_SIZE, buf, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 2), parent, NODE_SIZE);
uint8_t *hashOutput = (uint8_t *)malloc(NODE_SIZE);
sha3(hashInput, NODE_SIZE * 3, hashOutput, NODE_SIZE);
NewNode(g, count, hashOutput);
count++;
free(hashOutput);
free(hashInput);
free(parent);
free(buf);
}
}
else if (graph == 3)
{
uint64_t secondButter = count;
for (i = 0; i < pow2indexInner_1; i++)
{
uint64_t nodeId = secondButter + i;
uint8_t *parent = GetNode(g, secondButter - pow2indexInner_1 + i);
uint8_t *buf = (uint8_t *)malloc(NODE_SIZE);
WriteVarInt(buf, nodeId);
uint8_t *hashInput = (uint8_t *)malloc(NODE_SIZE * 3);
memcpy(hashInput, g->pk, NODE_SIZE);
memcpy(hashInput + NODE_SIZE, buf, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 2), parent, NODE_SIZE);
uint8_t *hashOutput = (uint8_t *)malloc(NODE_SIZE);
sha3(hashInput, NODE_SIZE * 3, hashOutput, NODE_SIZE);
NewNode(g, count, hashOutput);
count++;
free(hashOutput);
free(hashInput);
free(parent);
free(buf);
}
}
else
{
uint64_t sinks = count;
uint64_t sources = sinks + pow2indexInner - numXi(index);
for (i = 0; i < pow2indexInner_1; i++)
{
uint64_t nodeId0 = sinks + i;
uint64_t nodeId1 = sinks + i + pow2indexInner_1;
uint8_t *parent0 = GetNode(g, sinks - pow2indexInner_1 + i);
uint8_t *parent1_0 = GetNode(g, sources + i);
uint8_t *parent1_1 = GetNode(g, sources + i + pow2indexInner_1);
uint8_t *buf = (uint8_t *)malloc(NODE_SIZE);
WriteVarInt(buf, nodeId0);
uint8_t *hashInput = (uint8_t *)malloc(NODE_SIZE * 4);
memcpy(hashInput, g->pk, NODE_SIZE);
memcpy(hashInput + NODE_SIZE, buf, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 2), parent0, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 3), parent1_0, NODE_SIZE);
uint8_t *hashOutput0 = (uint8_t *)malloc(NODE_SIZE);
sha3(hashInput, NODE_SIZE * 4, hashOutput0, NODE_SIZE);
WriteVarInt(buf, nodeId1);
memcpy(hashInput, g->pk, NODE_SIZE);
memcpy(hashInput + NODE_SIZE, buf, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 2), parent0, NODE_SIZE);
memcpy(hashInput + (NODE_SIZE * 3), parent1_1, NODE_SIZE);
uint8_t *hashOutput1 = (uint8_t *)malloc(NODE_SIZE);
sha3(hashInput, NODE_SIZE * 4, hashOutput1, NODE_SIZE);
NewNode(g, nodeId0, hashOutput0);
NewNode(g, nodeId1, hashOutput1);
count += 2;
free(parent0);
free(parent1_0);
free(parent1_1);
free(buf);
free(hashInput);
free(hashOutput0);
free(hashOutput1);
}
}
if ((graph == 0 || graph == 3) ||
((graph == 1 || graph == 2) && index == 2))
{
ButterflyGraph(g, index - 1, &count);
}
else if (graph == 1 || graph == 2)
{
int64_t *tempStack = (int64_t *)malloc(sizeof(int64_t) * (stackSize + indicesSize));
memcpy(tempStack, stack, stackSize * sizeof(int64_t));
memcpy(tempStack + stackSize, indices, indicesSize * sizeof(int64_t));
stackSize += indicesSize;
free(stack);
stack = tempStack;
int32_t *tempGraphStack = (int32_t *)malloc(sizeof(int32_t) * (graphStackSize + graphsSize));
memcpy(tempGraphStack, graphStack, graphStackSize * sizeof(int32_t));
memcpy(tempGraphStack + graphStackSize, graphs, graphsSize * sizeof(int32_t));
graphStackSize += graphsSize;
free(graphStack);
graphStack = tempGraphStack;
}
free(indices);
free(graphs);
}
free(stack);
free(graphStack);
}
struct Graph *NewGraph(int64_t index, const char* targetFile, uint8_t *pk)
{
uint8_t exists = 0;
FILE *db;
if ((db = fopen_utf8(targetFile, "r")) != NULL)
{
fclose(db);
exists = 1;
}
db = fopen_utf8(targetFile, "wb+");
int64_t size = numXi(index);
int64_t log2 = Log2(size) + 1;
int64_t pow2 = 1 << ((uint64_t)log2);
struct Graph *g = (struct Graph *)malloc(sizeof(struct Graph));
if ( !g ) return NULL;
g->db = db;
g->log2 = log2;
g->pow2 = pow2;
g->pk = pk;
g->index = index;
if (exists == 0)
{
XiGraphIter(g, index);
}
fclose(db);
return g;
}
//-----------------------------------------------------------------------------
// Generate the Verthash data file at the specified path.
int verthash_generate_data_file(const char* output_file_name)
{
const char *hashInput = "Verthash Proof-of-Space Datafile";
uint8_t *pk = (uint8_t*)malloc( NODE_SIZE );
if ( !pk )
{
applog( LOG_ERR, "Verthash data memory allocation failed");
return -1;
}
sha3( hashInput, 32, pk, NODE_SIZE );
int64_t index = 17;
if ( !NewGraph( index, output_file_name, pk ) )
{
applog( LOG_ERR, "Verthash file creation failed");
return -1;
}
return 0;
}

59
algo/verthash/Verthash.h Normal file
View File

@@ -0,0 +1,59 @@
/*
* Copyright 2018-2021 CryptoGraphics
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version. See LICENSE for more details.
*/
#ifndef Verthash_INCLUDE_ONCE
#define Verthash_INCLUDE_ONCE
#include "tiny_sha3/sha3.h"
#include "fopen_utf8.h"
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
// Verthash constants used to compute the bitmask, used inside the kernel during the I/O pass
#define VH_HASH_OUT_SIZE 32
#define VH_BYTE_ALIGNMENT 16
#define VH_HEADER_SIZE 80
//-----------------------------------------------------------------------------
// Verthash data
//! Verthash C API for data manipulation.
typedef struct VerthashInfo
{
char* fileName;
uint8_t* data;
uint64_t dataSize;
uint32_t bitmask;
} verthash_info_t;
//! Must be called before usage. Resets all fields and sets the mining data file name.
//! Return codes:
//! 0 - Success (no error).
//! 1 - File I/O error (file size could not be determined).
//! 2 - Memory allocation error.
//! -1 - Invalid file name, or the file could not be opened, read or generated.
int verthash_info_init(verthash_info_t* info, const char* file_name);
//! Reset all fields and free allocated data.
void verthash_info_free(verthash_info_t* info);
//! Generate verthash data file and save it to specified location.
int verthash_generate_data_file(const char* output_file_name);
void verthash_hash(const unsigned char* blob_bytes,
const size_t blob_size,
const unsigned char(*input)[VH_HEADER_SIZE],
unsigned char(*output)[VH_HASH_OUT_SIZE]);
void verthash_sha3_512_prehash_72( const void *input );
void verthash_sha3_512_final_8( void *hash, const uint64_t nonce );
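/* A minimal usage sketch (hypothetical driver, not part of this tree),
 * assuming an 80 byte block header in 'header':
 *
 *   verthash_info_t info;
 *   if ( verthash_info_init( &info, "verthash.dat" ) == 0 )
 *   {
 *      unsigned char header[VH_HEADER_SIZE] = {0};
 *      unsigned char hash[VH_HASH_OUT_SIZE];
 *      verthash_sha3_512_prehash_72( header );   // midstate of bytes 0..71
 *      verthash_hash( info.data, info.dataSize,
 *                     (const unsigned char (*)[VH_HEADER_SIZE]) header,
 *                     (unsigned char (*)[VH_HASH_OUT_SIZE]) hash );
 *      verthash_info_free( &info );
 *   }
 */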
#endif // !Verthash_INCLUDE_ONCE

181
algo/verthash/fopen_utf8.c Normal file
View File

@@ -0,0 +1,181 @@
#ifndef H_FOPEN_UTF8
#define H_FOPEN_UTF8
#include "fopen_utf8.h"
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>
int utf8_char_size(const uint8_t *c)
{
const uint8_t m0x = 0x80, c0x = 0x00,
m10x = 0xC0, c10x = 0x80,
m110x = 0xE0, c110x = 0xC0,
m1110x = 0xF0, c1110x = 0xE0,
m11110x = 0xF8, c11110x = 0xF0;
if ((c[0] & m0x) == c0x)
return 1;
if ((c[0] & m110x) == c110x)
if ((c[1] & m10x) == c10x)
return 2;
if ((c[0] & m1110x) == c1110x)
if ((c[1] & m10x) == c10x)
if ((c[2] & m10x) == c10x)
return 3;
if ((c[0] & m11110x) == c11110x)
if ((c[1] & m10x) == c10x)
if ((c[2] & m10x) == c10x)
if ((c[3] & m10x) == c10x)
return 4;
if ((c[0] & m10x) == c10x) // not a first UTF-8 byte
return 0;
return -1; // if c[0] is a first byte but the other bytes don't match
}
uint32_t utf8_to_unicode32(const uint8_t *c, size_t *index)
{
uint32_t v;
int size;
const uint8_t m6 = 63, m5 = 31, m4 = 15, m3 = 7;
if (c==NULL)
return 0;
size = utf8_char_size(c);
if (size > 0 && index)
*index += size-1;
switch (size)
{
case 1:
v = c[0];
break;
case 2:
v = c[0] & m5;
v = v << 6 | (c[1] & m6);
break;
case 3:
v = c[0] & m4;
v = v << 6 | (c[1] & m6);
v = v << 6 | (c[2] & m6);
break;
case 4:
v = c[0] & m3;
v = v << 6 | (c[1] & m6);
v = v << 6 | (c[2] & m6);
v = v << 6 | (c[3] & m6);
break;
case 0: // not a first UTF-8 byte
case -1: // corrupt UTF-8 letter
default:
v = -1;
break;
}
return v;
}
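/* Worked example: the two byte sequence 0xC3 0xA9 ("é") decodes as
   ((0xC3 & 0x1F) << 6) | (0xA9 & 0x3F) = 0xC0 | 0x29 = 0xE9 (U+00E9). */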
int codepoint_utf16_size(uint32_t c)
{
if (c < 0x10000) return 1;
if (c < 0x110000) return 2;
return 0;
}
uint16_t *sprint_utf16(uint16_t *str, uint32_t c) // str must be able to hold 1 to 3 entries and will be null-terminated by this function
{
int c_size;
if (str==NULL)
return NULL;
c_size = codepoint_utf16_size(c);
switch (c_size)
{
case 1:
str[0] = c;
if (c > 0)
str[1] = '\0';
break;
case 2:
c -= 0x10000;
str[0] = 0xD800 + (c >> 10);
str[1] = 0xDC00 + (c & 0x3FF);
str[2] = '\0';
break;
default:
str[0] = '\0';
}
return str;
}
size_t strlen_utf8_to_utf16(const uint8_t *str)
{
size_t i, count;
uint32_t c;
for (i=0, count=0; ; i++)
{
if (str[i]==0)
return count;
c = utf8_to_unicode32(&str[i], &i);
count += codepoint_utf16_size(c);
}
}
uint16_t *utf8_to_utf16(const uint8_t *utf8, uint16_t *utf16)
{
size_t i, j;
uint32_t c;
if (utf8==NULL)
return NULL;
if (utf16==NULL)
utf16 = (uint16_t *) calloc(strlen_utf8_to_utf16(utf8) + 1, sizeof(uint16_t));
for (i=0, j=0, c=1; c; i++)
{
c = utf8_to_unicode32(&utf8[i], &i);
sprint_utf16(&utf16[j], c);
j += codepoint_utf16_size(c);
}
return utf16;
}
FILE *fopen_utf8(const char *path, const char *mode)
{
#ifdef _WIN32
wchar_t *wpath, wmode[8];
FILE *file;
if (utf8_to_utf16((const uint8_t *) mode, (uint16_t *) wmode)==NULL)
return NULL;
wpath = (wchar_t *) utf8_to_utf16((const uint8_t *) path, NULL);
if (wpath==NULL)
return NULL;
file = _wfopen(wpath, wmode);
free(wpath);
return file;
#else
return fopen(path, mode);
#endif
}
#endif

25
algo/verthash/fopen_utf8.h Normal file
View File

@@ -0,0 +1,25 @@
#ifndef H_FOPEN_UTF8
#define H_FOPEN_UTF8
#ifdef __cplusplus
extern "C" {
#endif
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
int utf8_char_size(const uint8_t *c);
uint32_t utf8_to_unicode32(const uint8_t *c, size_t *index);
int codepoint_utf16_size(uint32_t c);
uint16_t *sprint_utf16(uint16_t *str, uint32_t c);
size_t strlen_utf8_to_utf16(const uint8_t *str);
uint16_t *utf8_to_utf16(const uint8_t *utf8, uint16_t *utf16);
FILE *fopen_utf8(const char *path, const char *mode);
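/* Usage sketch (hypothetical file name): the path is treated as UTF-8 on all
 * platforms; on Windows it is converted to UTF-16 and opened with _wfopen(),
 * elsewhere it falls through to fopen().
 *
 *   FILE *f = fopen_utf8( "verthash-\xC3\xA9.dat", "rb" );  // "verthash-é.dat"
 *   if ( f ) fclose( f );
 */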
#ifdef __cplusplus
}
#endif
#endif

301
algo/verthash/tiny_sha3/sha3-4way.c Normal file
View File

@@ -0,0 +1,301 @@
#if defined(__AVX2__)
// sha3-4way.c
// 19-Nov-11 Markku-Juhani O. Saarinen <mjos@iki.fi>
// vectorization by JayDDee 2021-03-27
//
// Revised 07-Aug-15 to match with official release of FIPS PUB 202 "SHA3"
// Revised 03-Sep-15 for portability + OpenSSL - style API
#include "sha3-4way.h"
// constants
static const uint64_t keccakf_rndc[24] = {
0x0000000000000001, 0x0000000000008082, 0x800000000000808a,
0x8000000080008000, 0x000000000000808b, 0x0000000080000001,
0x8000000080008081, 0x8000000000008009, 0x000000000000008a,
0x0000000000000088, 0x0000000080008009, 0x000000008000000a,
0x000000008000808b, 0x800000000000008b, 0x8000000000008089,
0x8000000000008003, 0x8000000000008002, 0x8000000000000080,
0x000000000000800a, 0x800000008000000a, 0x8000000080008081,
0x8000000000008080, 0x0000000080000001, 0x8000000080008008
};
void sha3_4way_keccakf( __m256i st[25] )
{
int i, j, r;
__m256i t, bc[5];
for ( r = 0; r < KECCAKF_ROUNDS; r++ )
{
// Theta
bc[0] = _mm256_xor_si256( st[0],
mm256_xor4( st[5], st[10], st[15], st[20] ) );
bc[1] = _mm256_xor_si256( st[1],
mm256_xor4( st[6], st[11], st[16], st[21] ) );
bc[2] = _mm256_xor_si256( st[2],
mm256_xor4( st[7], st[12], st[17], st[22] ) );
bc[3] = _mm256_xor_si256( st[3],
mm256_xor4( st[8], st[13], st[18], st[23] ) );
bc[4] = _mm256_xor_si256( st[4],
mm256_xor4( st[9], st[14], st[19], st[24] ) );
for ( i = 0; i < 5; i++ )
{
t = _mm256_xor_si256( bc[ (i+4) % 5 ],
mm256_rol_64( bc[ (i+1) % 5 ], 1 ) );
st[ i ] = _mm256_xor_si256( st[ i ], t );
st[ i+5 ] = _mm256_xor_si256( st[ i+ 5 ], t );
st[ i+10 ] = _mm256_xor_si256( st[ i+10 ], t );
st[ i+15 ] = _mm256_xor_si256( st[ i+15 ], t );
st[ i+20 ] = _mm256_xor_si256( st[ i+20 ], t );
}
// Rho Pi
#define RHO_PI( i, c ) \
bc[0] = st[ i ]; \
st[ i ] = mm256_rol_64( t, c ); \
t = bc[0]
t = st[1];
RHO_PI( 10, 1 );
RHO_PI( 7, 3 );
RHO_PI( 11, 6 );
RHO_PI( 17, 10 );
RHO_PI( 18, 15 );
RHO_PI( 3, 21 );
RHO_PI( 5, 28 );
RHO_PI( 16, 36 );
RHO_PI( 8, 45 );
RHO_PI( 21, 55 );
RHO_PI( 24, 2 );
RHO_PI( 4, 14 );
RHO_PI( 15, 27 );
RHO_PI( 23, 41 );
RHO_PI( 19, 56 );
RHO_PI( 13, 8 );
RHO_PI( 12, 25 );
RHO_PI( 2, 43 );
RHO_PI( 20, 62 );
RHO_PI( 14, 18 );
RHO_PI( 22, 39 );
RHO_PI( 9, 61 );
RHO_PI( 6, 20 );
RHO_PI( 1, 44 );
#undef RHO_PI
// Chi
for ( j = 0; j < 25; j += 5 )
{
memcpy( bc, &st[ j ], 5*32 );
st[ j ] = _mm256_xor_si256( st[ j ],
_mm256_andnot_si256( bc[1], bc[2] ) );
st[ j+1 ] = _mm256_xor_si256( st[ j+1 ],
_mm256_andnot_si256( bc[2], bc[3] ) );
st[ j+2 ] = _mm256_xor_si256( st[ j+2 ],
_mm256_andnot_si256( bc[3], bc[4] ) );
st[ j+3 ] = _mm256_xor_si256( st[ j+3 ],
_mm256_andnot_si256( bc[4], bc[0] ) );
st[ j+4 ] = _mm256_xor_si256( st[ j+4 ],
_mm256_andnot_si256( bc[0], bc[1] ) );
}
// Iota
st[0] = _mm256_xor_si256( st[0],
_mm256_set1_epi64x( keccakf_rndc[ r ] ) );
}
}
int sha3_4way_init( sha3_4way_ctx_t *c, int mdlen )
{
for ( int i = 0; i < 25; i++ ) c->st[ i ] = m256_zero;
c->mdlen = mdlen;
c->rsiz = 200 - 2 * mdlen;
c->pt = 0;
return 1;
}
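// Note: absorbs whole 64-bit words per lane, so len is assumed to be a
// multiple of 8 bytes.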
int sha3_4way_update( sha3_4way_ctx_t *c, const void *data, size_t len )
{
size_t i;
int j = c->pt;
const int rsiz = c->rsiz / 8;
const int l = len / 8;
for ( i = 0; i < l; i++ )
{
c->st[ j ] = _mm256_xor_si256( c->st[ j ],
( (const __m256i*)data )[i] );
j++;
if ( j >= rsiz )
{
sha3_4way_keccakf( c->st );
j = 0;
}
}
c->pt = j;
return 1;
}
int sha3_4way_final( void *md, sha3_4way_ctx_t *c )
{
c->st[ c->pt ] = _mm256_xor_si256( c->st[ c->pt ],
m256_const1_64( 6 ) );
c->st[ c->rsiz / 8 - 1 ] =
_mm256_xor_si256( c->st[ c->rsiz / 8 - 1 ],
m256_const1_64( 0x8000000000000000 ) );
sha3_4way_keccakf( c->st );
memcpy( md, c->st, c->mdlen * 4 );
return 1;
}
void *sha3_4way( const void *in, size_t inlen, void *md, int mdlen )
{
sha3_4way_ctx_t ctx;
sha3_4way_init( &ctx, mdlen);
sha3_4way_update( &ctx, in, inlen );
sha3_4way_final( md, &ctx );
return md;
}
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
void sha3_8way_keccakf( __m512i st[25] )
{
int i, j, r;
__m512i t, bc[5];
// actual iteration
for ( r = 0; r < KECCAKF_ROUNDS; r++ )
{
// Theta
for ( i = 0; i < 5; i++ )
bc[i] = _mm512_xor_si512( st[i],
mm512_xor4( st[ i+5 ], st[ i+10 ], st[ i+15 ], st[i+20 ] ) );
for ( i = 0; i < 5; i++ )
{
t = _mm512_xor_si512( bc[(i + 4) % 5],
_mm512_rol_epi64( bc[(i + 1) % 5], 1 ) );
for ( j = 0; j < 25; j += 5 )
st[j + i] = _mm512_xor_si512( st[j + i], t );
}
// Rho Pi
#define RHO_PI( i, c ) \
bc[0] = st[ i ]; \
st[ i ] = _mm512_rol_epi64( t, c ); \
t = bc[0]
t = st[1];
RHO_PI( 10, 1 );
RHO_PI( 7, 3 );
RHO_PI( 11, 6 );
RHO_PI( 17, 10 );
RHO_PI( 18, 15 );
RHO_PI( 3, 21 );
RHO_PI( 5, 28 );
RHO_PI( 16, 36 );
RHO_PI( 8, 45 );
RHO_PI( 21, 55 );
RHO_PI( 24, 2 );
RHO_PI( 4, 14 );
RHO_PI( 15, 27 );
RHO_PI( 23, 41 );
RHO_PI( 19, 56 );
RHO_PI( 13, 8 );
RHO_PI( 12, 25 );
RHO_PI( 2, 43 );
RHO_PI( 20, 62 );
RHO_PI( 14, 18 );
RHO_PI( 22, 39 );
RHO_PI( 9, 61 );
RHO_PI( 6, 20 );
RHO_PI( 1, 44 );
#undef RHO_PI
// Chi
for ( j = 0; j < 25; j += 5 )
{
for ( i = 0; i < 5; i++ )
bc[i] = st[j + i];
for ( i = 0; i < 5; i++ )
st[ j+i ] = _mm512_xor_si512( st[ j+i ], _mm512_andnot_si512(
bc[ (i+1) % 5 ], bc[ (i+2) % 5 ] ) );
}
// Iota
st[0] = _mm512_xor_si512( st[0], _mm512_set1_epi64( keccakf_rndc[r] ) );
}
}
// Initialize the context for SHA3
int sha3_8way_init( sha3_8way_ctx_t *c, int mdlen )
{
for ( int i = 0; i < 25; i++ ) c->st[ i ] = m512_zero;
c->mdlen = mdlen;
c->rsiz = 200 - 2 * mdlen;
c->pt = 0;
return 1;
}
// update state with more data
int sha3_8way_update( sha3_8way_ctx_t *c, const void *data, size_t len )
{
size_t i;
int j = c->pt;
const int rsiz = c->rsiz / 8;
const int l = len / 8;
for ( i = 0; i < l; i++ )
{
c->st[ j ] = _mm512_xor_si512( c->st[ j ],
( (const __m512i*)data )[i] );
j++;
if ( j >= rsiz )
{
sha3_8way_keccakf( c->st );
j = 0;
}
}
c->pt = j;
return 1;
}
// finalize and output a hash
int sha3_8way_final( void *md, sha3_8way_ctx_t *c )
{
c->st[ c->pt ] =
_mm512_xor_si512( c->st[ c->pt ],
m512_const1_64( 6 ) );
c->st[ c->rsiz / 8 - 1 ] =
_mm512_xor_si512( c->st[ c->rsiz / 8 - 1 ],
m512_const1_64( 0x8000000000000000 ) );
sha3_8way_keccakf( c->st );
memcpy( md, c->st, c->mdlen * 8 );
return 1;
}
// compute a SHA-3 hash (md) of given byte length from "in"
void *sha3_8way( const void *in, size_t inlen, void *md, int mdlen )
{
sha3_8way_ctx_t sha3;
sha3_8way_init( &sha3, mdlen);
sha3_8way_update( &sha3, in, inlen );
sha3_8way_final( md, &sha3 );
return md;
}
#endif // AVX512
#endif // AVX2

67
algo/verthash/tiny_sha3/sha3-4way.h Normal file
View File

@@ -0,0 +1,67 @@
// sha3.h
// 19-Nov-11 Markku-Juhani O. Saarinen <mjos@iki.fi>
// 2021-03-27 JayDDee
//
#ifndef SHA3_4WAY_H
#define SHA3_4WAY_H
#include <stddef.h>
#include <stdint.h>
#include "simd-utils.h"
#if defined(__cplusplus)
extern "C" {
#endif
#ifndef KECCAKF_ROUNDS
#define KECCAKF_ROUNDS 24
#endif
#if defined(__AVX2__)
typedef struct
{
__m256i st[25]; // 64-bit words * 4 lanes
int pt, rsiz, mdlen; // these don't overflow
} sha3_4way_ctx_t __attribute__ ((aligned (64)));
// Compression function.
void sha3_4way_keccakf( __m256i st[25] );
// OpenSSL-like interface
int sha3_4way_init( sha3_4way_ctx_t *c, int mdlen ); // mdlen = hash output in bytes
int sha3_4way_update( sha3_4way_ctx_t *c, const void *data, size_t len );
int sha3_4way_final( void *md, sha3_4way_ctx_t *c ); // digest goes to md
// compute a sha3 hash (md) of given byte length from "in"
void *sha3_4way( const void *in, size_t inlen, void *md, int mdlen );
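/* A minimal usage sketch (hypothetical message): hash the same 8 byte word in
 * all four lanes. Input and output are interleaved 4x64, so word i of lane j
 * of the digest lands at md[ i*4 + j ].
 *
 *   __m256i vin = _mm256_set1_epi64x( 0x0123456789abcdefULL );
 *   uint64_t md[16];                     // 32 byte digest x 4 lanes
 *   sha3_4way_ctx_t ctx;
 *   sha3_4way_init( &ctx, 32 );
 *   sha3_4way_update( &ctx, &vin, 8 );   // 8 bytes per lane
 *   sha3_4way_final( md, &ctx );         // md[0..3] = word 0 of lanes 0..3
 */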
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
// state context
typedef struct
{
__m512i st[25]; // 64-bit words * 8 lanes
int pt, rsiz, mdlen; // these don't overflow
} sha3_8way_ctx_t __attribute__ ((aligned (64)));
// Compression function.
void sha3_8way_keccakf( __m512i st[25] );
// OpenSSL-like interface
int sha3_8way_init( sha3_8way_ctx_t *c, int mdlen ); // mdlen = hash output in bytes
int sha3_8way_update( sha3_8way_ctx_t *c, const void *data, size_t len );
int sha3_8way_final( void *md, sha3_8way_ctx_t *c ); // digest goes to md
// compute a sha3 hash (md) of given byte length from "in"
void *sha3_8way( const void *in, size_t inlen, void *md, int mdlen );
#endif // AVX512
#endif // AVX2
#if defined(__cplusplus)
}
#endif
#endif

226
algo/verthash/tiny_sha3/sha3.c Normal file
View File

@@ -0,0 +1,226 @@
// sha3.c
// 19-Nov-11 Markku-Juhani O. Saarinen <mjos@iki.fi>
// Revised 07-Aug-15 to match with official release of FIPS PUB 202 "SHA3"
// Revised 03-Sep-15 for portability + OpenSSL - style API
#include "sha3.h"
#include <string.h>
// update the state with given number of rounds
void sha3_keccakf(uint64_t st[25])
{
// constants
const uint64_t keccakf_rndc[24] = {
0x0000000000000001, 0x0000000000008082, 0x800000000000808a,
0x8000000080008000, 0x000000000000808b, 0x0000000080000001,
0x8000000080008081, 0x8000000000008009, 0x000000000000008a,
0x0000000000000088, 0x0000000080008009, 0x000000008000000a,
0x000000008000808b, 0x800000000000008b, 0x8000000000008089,
0x8000000000008003, 0x8000000000008002, 0x8000000000000080,
0x000000000000800a, 0x800000008000000a, 0x8000000080008081,
0x8000000000008080, 0x0000000080000001, 0x8000000080008008
};
/*
const int keccakf_rotc[24] = {
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 2, 14,
27, 41, 56, 8, 25, 43, 62, 18, 39, 61, 20, 44
};
const int keccakf_piln[24] = {
10, 7, 11, 17, 18, 3, 5, 16, 8, 21, 24, 4,
15, 23, 19, 13, 12, 2, 20, 14, 22, 9, 6, 1
};
*/
// variables
int i, j, r;
uint64_t t, bc[5];
#if __BYTE_ORDER__ != __ORDER_LITTLE_ENDIAN__
uint8_t *v;
// endianness conversion; this is redundant on little-endian targets
for (i = 0; i < 25; i++) {
v = (uint8_t *) &st[i];
st[i] = ((uint64_t) v[0]) | (((uint64_t) v[1]) << 8) |
(((uint64_t) v[2]) << 16) | (((uint64_t) v[3]) << 24) |
(((uint64_t) v[4]) << 32) | (((uint64_t) v[5]) << 40) |
(((uint64_t) v[6]) << 48) | (((uint64_t) v[7]) << 56);
}
#endif
// actual iteration
for (r = 0; r < KECCAKF_ROUNDS; r++) {
// Theta
for (i = 0; i < 5; i++)
bc[i] = st[i] ^ st[i + 5] ^ st[i + 10] ^ st[i + 15] ^ st[i + 20];
for (i = 0; i < 5; i++) {
t = bc[(i + 4) % 5] ^ ROTL64(bc[(i + 1) % 5], 1);
for (j = 0; j < 25; j += 5)
st[j + i] ^= t;
}
// Rho Pi
#define RHO_PI( i, c ) \
bc[0] = st[ i ]; \
st[ i ] = ROTL64( t, c ); \
t = bc[0]
t = st[1];
RHO_PI( 10, 1 );
RHO_PI( 7, 3 );
RHO_PI( 11, 6 );
RHO_PI( 17, 10 );
RHO_PI( 18, 15 );
RHO_PI( 3, 21 );
RHO_PI( 5, 28 );
RHO_PI( 16, 36 );
RHO_PI( 8, 45 );
RHO_PI( 21, 55 );
RHO_PI( 24, 2 );
RHO_PI( 4, 14 );
RHO_PI( 15, 27 );
RHO_PI( 23, 41 );
RHO_PI( 19, 56 );
RHO_PI( 13, 8 );
RHO_PI( 12, 25 );
RHO_PI( 2, 43 );
RHO_PI( 20, 62 );
RHO_PI( 14, 18 );
RHO_PI( 22, 39 );
RHO_PI( 9, 61 );
RHO_PI( 6, 20 );
RHO_PI( 1, 44 );
#undef RHO_PI
/*
for (i = 0; i < 24; i++) {
j = keccakf_piln[i];
bc[0] = st[j];
st[j] = ROTL64(t, keccakf_rotc[i]);
t = bc[0];
}
*/
// Chi
for (j = 0; j < 25; j += 5) {
for (i = 0; i < 5; i++)
bc[i] = st[j + i];
for (i = 0; i < 5; i++)
st[j + i] ^= (~bc[(i + 1) % 5]) & bc[(i + 2) % 5];
}
// Iota
st[0] ^= keccakf_rndc[r];
}
#if __BYTE_ORDER__ != __ORDER_LITTLE_ENDIAN__
// endianness conversion; this is redundant on little-endian targets
for (i = 0; i < 25; i++) {
v = (uint8_t *) &st[i];
t = st[i];
v[0] = t & 0xFF;
v[1] = (t >> 8) & 0xFF;
v[2] = (t >> 16) & 0xFF;
v[3] = (t >> 24) & 0xFF;
v[4] = (t >> 32) & 0xFF;
v[5] = (t >> 40) & 0xFF;
v[6] = (t >> 48) & 0xFF;
v[7] = (t >> 56) & 0xFF;
}
#endif
}
// Initialize the context for SHA3
int sha3_init(sha3_ctx_t *c, int mdlen)
{
int i;
for (i = 0; i < 25; i++)
c->st.q[i] = 0;
c->mdlen = mdlen;
c->rsiz = 200 - 2 * mdlen;
c->pt = 0;
return 1;
}
// update state with more data
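// Note: unlike the original byte oriented tiny_sha3, this variant absorbs
// whole 64-bit words, so len is expected to be a multiple of 8 bytes.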
int sha3_update(sha3_ctx_t *c, const void *data, size_t len)
{
size_t i;
int j = c->pt / 8;
const int rsiz = c->rsiz / 8;
const int l = len / 8;
for ( i = 0; i < l; i++ )
{
c->st.q[ j++ ] ^= ( ((const uint64_t *) data) [i] );
if ( j >= rsiz )
{
sha3_keccakf( c->st.q );
j = 0;
}
}
c->pt = j*8;
return 1;
}
// finalize and output a hash
int sha3_final(void *md, sha3_ctx_t *c)
{
c->st.q[ c->pt / 8 ] ^= 6;
c->st.q[ c->rsiz / 8 - 1 ] ^= 0x8000000000000000;
sha3_keccakf(c->st.q);
memcpy( md, c->st.q, c->mdlen );
return 1;
}
// compute a SHA-3 hash (md) of given byte length from "in"
void *sha3(const void *in, size_t inlen, void *md, int mdlen)
{
sha3_ctx_t sha3;
sha3_init(&sha3, mdlen);
sha3_update(&sha3, in, inlen);
sha3_final(md, &sha3);
return md;
}
// SHAKE128 and SHAKE256 extensible-output functionality
void shake_xof(sha3_ctx_t *c)
{
c->st.b[c->pt] ^= 0x1F;
c->st.b[c->rsiz - 1] ^= 0x80;
sha3_keccakf(c->st.q);
c->pt = 0;
}
void shake_out(sha3_ctx_t *c, void *out, size_t len)
{
size_t i;
int j;
j = c->pt;
for (i = 0; i < len; i++) {
if (j >= c->rsiz) {
sha3_keccakf(c->st.q);
j = 0;
}
((uint8_t *) out)[i] = c->st.b[j++];
}
c->pt = j;
}

55
algo/verthash/tiny_sha3/sha3.h Normal file
View File

@@ -0,0 +1,55 @@
// sha3.h
// 19-Nov-11 Markku-Juhani O. Saarinen <mjos@iki.fi>
#ifndef SHA3_H
#define SHA3_H
#include <stddef.h>
#include <stdint.h>
#if defined(__cplusplus)
extern "C" {
#endif
#ifndef KECCAKF_ROUNDS
#define KECCAKF_ROUNDS 24
#endif
#ifndef ROTL64
#define ROTL64(x, y) (((x) << (y)) | ((x) >> (64 - (y))))
#endif
// state context
typedef struct {
union { // state:
uint8_t b[200]; // 8-bit bytes
uint64_t q[25]; // 64-bit words
} st;
int pt, rsiz, mdlen; // these don't overflow
} sha3_ctx_t;
// Compression function.
void sha3_keccakf(uint64_t st[25]);
// OpenSSL-like interface
int sha3_init(sha3_ctx_t *c, int mdlen); // mdlen = hash output in bytes
int sha3_update(sha3_ctx_t *c, const void *data, size_t len);
int sha3_final(void *md, sha3_ctx_t *c); // digest goes to md
// compute a sha3 hash (md) of given byte length from "in"
void *sha3(const void *in, size_t inlen, void *md, int mdlen);
// SHAKE128 and SHAKE256 extensible-output functions
#define shake128_init(c) sha3_init(c, 16)
#define shake256_init(c) sha3_init(c, 32)
#define shake_update sha3_update
void shake_xof(sha3_ctx_t *c);
void shake_out(sha3_ctx_t *c, void *out, size_t len);
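/* Usage sketch (hypothetical input). This port absorbs whole 64-bit words,
 * so input lengths should be multiples of 8 bytes:
 *
 *   uint8_t digest[32];
 *   sha3( "16-byte message!", 16, digest, 32 );   // one-shot SHA3-256
 *
 *   sha3_ctx_t c;                                 // SHAKE128, 32 byte output
 *   shake128_init( &c );
 *   shake_update( &c, "16-byte message!", 16 );
 *   shake_xof( &c );
 *   shake_out( &c, digest, 32 );
 */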
#if defined(__cplusplus)
}
#endif
#endif

178
algo/verthash/verthash-gate.c Normal file
View File

@@ -0,0 +1,178 @@
#include "algo-gate-api.h"
#include "algo/sha/sph_sha2.h"
#include "Verthash.h"
#include "tiny_sha3/sha3-4way.h"
static verthash_info_t verthashInfo;
// Verthash data file hash in bytes for verification
// 0x48aa21d7afededb63976d48a8ff8ec29d5b02563af4a1110b056cd43e83155a5
static const uint8_t verthashDatFileHash_bytes[32] =
{ 0xa5, 0x55, 0x31, 0xe8, 0x43, 0xcd, 0x56, 0xb0,
0x10, 0x11, 0x4a, 0xaf, 0x63, 0x25, 0xb0, 0xd5,
0x29, 0xec, 0xf8, 0x8f, 0x8a, 0xd4, 0x76, 0x39,
0xb6, 0xed, 0xed, 0xaf, 0xd7, 0x21, 0xaa, 0x48 };
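// Note: the bytes above are the SHA256 of verthash.dat stored in reverse
// order relative to the hex string quoted in the comment.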
#if defined(__AVX2__)
static __thread sha3_4way_ctx_t sha3_mid_ctxA;
static __thread sha3_4way_ctx_t sha3_mid_ctxB;
#else
static __thread sha3_ctx_t sha3_mid_ctx[8];
#endif
void verthash_sha3_512_prehash_72( const void *input )
{
#if defined(__AVX2__)
__m256i vin[10];
mm256_intrlv80_4x64( vin, input );
sha3_4way_init( &sha3_mid_ctxA, 64 );
sha3_4way_init( &sha3_mid_ctxB, 64 );
vin[0] = _mm256_add_epi8( vin[0], _mm256_set_epi64x( 4,3,2,1 ) );
sha3_4way_update( &sha3_mid_ctxA, vin, 72 );
vin[0] = _mm256_add_epi8( vin[0], _mm256_set1_epi64x( 4 ) );
sha3_4way_update( &sha3_mid_ctxB, vin, 72 );
#else
char in[80] __attribute__ ((aligned (64)));
memcpy( in, input, 80 );
for ( int i = 0; i < 8; i++ )
{
in[0] += 1;
sha3_init( &sha3_mid_ctx[i], 64 );
sha3_update( &sha3_mid_ctx[i], in, 72 );
}
#endif
}
void verthash_sha3_512_final_8( void *hash, const uint64_t nonce )
{
#if defined(__AVX2__)
__m256i vhashA[ 10 ] __attribute__ ((aligned (64)));
__m256i vhashB[ 10 ] __attribute__ ((aligned (64)));
sha3_4way_ctx_t ctx;
__m256i vnonce = _mm256_set1_epi64x( nonce );
memcpy( &ctx, &sha3_mid_ctxA, sizeof ctx );
sha3_4way_update( &ctx, &vnonce, 8 );
sha3_4way_final( vhashA, &ctx );
memcpy( &ctx, &sha3_mid_ctxB, sizeof ctx );
sha3_4way_update( &ctx, &vnonce, 8 );
sha3_4way_final( vhashB, &ctx );
dintrlv_4x64( hash, hash+64, hash+128, hash+192, vhashA, 512 );
dintrlv_4x64( hash+256, hash+320, hash+384, hash+448, vhashB, 512 );
#else
for ( int i = 0; i < 8; i++ )
{
sha3_ctx_t ctx;
memcpy( &ctx, &sha3_mid_ctx[i], sizeof ctx );
sha3_update( &ctx, &nonce, 8 );
sha3_final( hash + i*64, &ctx );
}
#endif
}
int scanhash_verthash( struct work *work, uint32_t max_nonce,
uint64_t *hashes_done, struct thr_info *mythr )
{
uint32_t edata[20] __attribute__((aligned(64)));
uint32_t hash[8] __attribute__((aligned(64)));
uint32_t *pdata = work->data;
uint32_t *ptarget = work->target;
const uint32_t first_nonce = pdata[19];
const uint32_t last_nonce = max_nonce - 1;
uint32_t n = first_nonce;
const int thr_id = mythr->id;
const bool bench = opt_benchmark;
mm128_bswap32_80( edata, pdata );
verthash_sha3_512_prehash_72( edata );
do
{
edata[19] = n;
verthash_hash( verthashInfo.data, verthashInfo.dataSize,
(const unsigned char (*)[80]) edata,
(unsigned char (*)[32]) hash );
if ( valid_hash( hash, ptarget ) && !bench )
{
pdata[19] = bswap_32( n );
submit_solution( work, hash, mythr );
}
n++;
} while ( n < last_nonce && !work_restart[thr_id].restart );
*hashes_done = n - first_nonce;
pdata[19] = n;
return 0;
}
const char *default_verthash_data_file = "verthash.dat";
bool register_verthash_algo( algo_gate_t* gate )
{
opt_target_factor = 256.0;
gate->scanhash = (void*)&scanhash_verthash;
gate->optimizations = AVX2_OPT;
char *verthash_data_file = opt_data_file ? opt_data_file
: default_verthash_data_file;
int vhLoadResult = verthash_info_init( &verthashInfo, verthash_data_file );
if (vhLoadResult == 0) // No Error
{
if ( opt_verify )
{
uint8_t vhDataFileHash[32] = { 0 };
applog( LOG_NOTICE, "Verifying Verthash data" );
sph_sha256_full( vhDataFileHash, verthashInfo.data,
verthashInfo.dataSize );
if ( memcmp( vhDataFileHash, verthashDatFileHash_bytes,
sizeof(verthashDatFileHash_bytes) ) == 0 )
applog( LOG_NOTICE, "Verthash data has been verified" );
else
{
applog( LOG_ERR, "Verthash data verification has failed" );
return false;
}
}
}
else
{
// Handle Verthash error codes
if ( vhLoadResult == 1 )
{
applog( LOG_ERR, "Verthash data file not found: %s", verthash_data_file );
if ( !opt_data_file )
applog( LOG_NOTICE, "Add '--verify' to create verthash.dat");
}
else if ( vhLoadResult == 2 )
applog( LOG_ERR, "Failed to allocate memory for Verthash data" );
// else // for debugging purposes
// applog( LOG_ERR, "Verthash data initialization unknown error code: %d",
// vhLoadResult );
return false;
}
printf("\n");
return true;
}

View File

@@ -16,8 +16,7 @@
#if defined (X16R_8WAY)
// Perform midstate prehash of hash functions with block size <= 64 bytes
// and interleave 4x64 before nonce insertion for final hash.
// Perform midstate prehash of hash functions with block size <= 72 bytes.
void x16r_8way_prehash( void *vdata, void *pdata )
{
@@ -34,6 +33,11 @@ void x16r_8way_prehash( void *vdata, void *pdata )
jh512_8way_init( &x16r_ctx.jh );
jh512_8way_update( &x16r_ctx.jh, vdata, 64 );
break;
case KECCAK:
mm512_bswap32_intrlv80_8x64( vdata, pdata );
keccak512_8way_init( &x16r_ctx.keccak );
keccak512_8way_update( &x16r_ctx.keccak, vdata, 72 );
break;
case SKEIN:
mm512_bswap32_intrlv80_8x64( vdata, pdata );
skein512_8way_init( &x16r_ctx.skein );
@@ -173,13 +177,13 @@ int x16r_8way_hash_generic( void* output, const void* input, int thrid )
hash7, vhash );
break;
case KECCAK:
keccak512_8way_init( &ctx.keccak );
if ( i == 0 )
keccak512_8way_update( &ctx.keccak, input, size );
if ( i == 0 )
keccak512_8way_update( &ctx.keccak, input + (72<<3), 8 );
else
{
intrlv_8x64( vhash, in0, in1, in2, in3, in4, in5, in6, in7,
size<<3 );
keccak512_8way_init( &ctx.keccak );
keccak512_8way_update( &ctx.keccak, vhash, size );
}
keccak512_8way_close( &ctx.keccak, vhash );
@@ -490,6 +494,7 @@ int scanhash_x16r_8way( struct work *work, uint32_t max_nonce,
{
x16_r_s_getAlgoString( (const uint8_t*)bedata1, x16r_hash_order );
s_ntime = ntime;
if ( opt_debug && !thr_id )
applog( LOG_INFO, "hash order %s (%08x)", x16r_hash_order, ntime );
}
@@ -533,6 +538,11 @@ void x16r_4way_prehash( void *vdata, void *pdata )
jh512_4way_init( &x16r_ctx.jh );
jh512_4way_update( &x16r_ctx.jh, vdata, 64 );
break;
case KECCAK:
mm256_bswap32_intrlv80_4x64( vdata, pdata );
keccak512_4way_init( &x16r_ctx.keccak );
keccak512_4way_update( &x16r_ctx.keccak, vdata, 72 );
break;
case SKEIN:
mm256_bswap32_intrlv80_4x64( vdata, pdata );
skein512_4way_prehash64( &x16r_ctx.skein, vdata );
@@ -646,12 +656,12 @@ int x16r_4way_hash_generic( void* output, const void* input, int thrid )
dintrlv_4x64_512( hash0, hash1, hash2, hash3, vhash );
break;
case KECCAK:
keccak512_4way_init( &ctx.keccak );
if ( i == 0 )
keccak512_4way_update( &ctx.keccak, input, size );
if ( i == 0 )
keccak512_4way_update( &ctx.keccak, input + (72<<2), 8 );
else
{
intrlv_4x64( vhash, in0, in1, in2, in3, size<<3 );
keccak512_4way_init( &ctx.keccak );
keccak512_4way_update( &ctx.keccak, vhash, size );
}
keccak512_4way_close( &ctx.keccak, vhash );
@@ -883,7 +893,7 @@ int scanhash_x16r_4way( struct work *work, uint32_t max_nonce,
x16_r_s_getAlgoString( (const uint8_t*)bedata1, x16r_hash_order );
s_ntime = ntime;
if ( opt_debug && !thr_id )
applog( LOG_INFO, "hash order %s (%08x)", x16r_hash_order, ntime );
applog( LOG_INFO, "hash order %s (%08x)", x16r_hash_order, ntime );
}
x16r_4way_prehash( vdata, pdata );

20
configure vendored
View File

@@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for cpuminer-opt 3.15.5.
# Generated by GNU Autoconf 2.69 for cpuminer-opt 3.16.2.
#
#
# Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc.
@@ -577,8 +577,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='cpuminer-opt'
PACKAGE_TARNAME='cpuminer-opt'
PACKAGE_VERSION='3.15.5'
PACKAGE_STRING='cpuminer-opt 3.15.5'
PACKAGE_VERSION='3.16.2'
PACKAGE_STRING='cpuminer-opt 3.16.2'
PACKAGE_BUGREPORT=''
PACKAGE_URL=''
@@ -1332,7 +1332,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures cpuminer-opt 3.15.5 to adapt to many kinds of systems.
\`configure' configures cpuminer-opt 3.16.2 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@@ -1404,7 +1404,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of cpuminer-opt 3.15.5:";;
short | recursive ) echo "Configuration of cpuminer-opt 3.16.2:";;
esac
cat <<\_ACEOF
@@ -1509,7 +1509,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
cpuminer-opt configure 3.15.5
cpuminer-opt configure 3.16.2
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@@ -2012,7 +2012,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by cpuminer-opt $as_me 3.15.5, which was
It was created by cpuminer-opt $as_me 3.16.2, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@@ -2993,7 +2993,7 @@ fi
# Define the identity of the package.
PACKAGE='cpuminer-opt'
VERSION='3.15.5'
VERSION='3.16.2'
cat >>confdefs.h <<_ACEOF
@@ -6690,7 +6690,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by cpuminer-opt $as_me 3.15.5, which was
This file was extended by cpuminer-opt $as_me 3.16.2, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@@ -6756,7 +6756,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
cpuminer-opt config.status 3.15.5
cpuminer-opt config.status 3.16.2
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"

View File

@@ -1,4 +1,4 @@
AC_INIT([cpuminer-opt], [3.15.5])
AC_INIT([cpuminer-opt], [3.16.2])
AC_PREREQ([2.59c])
AC_CANONICAL_SYSTEM

View File

@@ -112,21 +112,20 @@ char* opt_param_key = NULL;
int opt_param_n = 0;
int opt_param_r = 0;
int opt_n_threads = 0;
bool opt_reset_on_stale = false;
bool opt_sapling = false;
// Windows doesn't support a 128 bit affinity mask.
// Both compile time and run time tests are needed.
#if defined(__linux) && defined(GCC_INT128)
#define AFFINITY_USES_UINT128 1
uint128_t opt_affinity = -1;
static uint128_t opt_affinity = -1;
static bool affinity_uses_uint128 = true;
#else
uint64_t opt_affinity = -1;
static uint64_t opt_affinity = -1;
static bool affinity_uses_uint128 = false;
#endif
int opt_priority = 0;
int opt_priority = 0; // deprecated
int num_cpus = 1;
int num_cpugroups = 1;
char *rpc_url = NULL;
@@ -134,6 +133,8 @@ char *rpc_userpass = NULL;
char *rpc_user, *rpc_pass;
char *short_url = NULL;
char *coinbase_address;
char *opt_data_file = NULL;
bool opt_verify = false;
// pk_buffer_size is used as a version selector by b58 code, therefore
// it must be set correctly to work.
@@ -204,6 +205,7 @@ static double lowest_share = 9e99; // lowest accepted share diff
static double last_targetdiff = 0.;
#if !(defined(__WINDOWS__) || defined(_WIN64) || defined(_WIN32))
static uint32_t hi_temp = 0;
static uint32_t prev_temp = 0;
#endif
@@ -490,8 +492,13 @@ static bool get_mininginfo( CURL *curl, struct work *work )
}
key = json_object_get( res, "networkhashps" );
if ( key && json_is_integer( key ) )
net_hashrate = (double) json_integer_value( key );
if ( key )
{
if ( json_is_integer( key ) )
net_hashrate = (double) json_integer_value( key );
else if ( json_is_real( key ) )
net_hashrate = (double) json_real_value( key );
}
key = json_object_get( res, "blocks" );
if ( key && json_is_integer( key ) )
@@ -506,26 +513,7 @@ static bool get_mininginfo( CURL *curl, struct work *work )
// complete missing data from getwork
work->height = (uint32_t) net_blocks + 1;
if ( work->height > g_work.height )
{
restart_threads();
/* redundant with new block log
if ( !opt_quiet )
{
char netinfo[64] = { 0 };
char srate[32] = { 0 };
sprintf( netinfo, "diff %.2f", net_diff );
if ( net_hashrate )
{
format_hashrate( net_hashrate, srate );
strcat( netinfo, ", net " );
strcat( netinfo, srate );
}
applog( LOG_BLUE, "%s block %d, %s",
algo_names[opt_algo], work->height, netinfo );
}
*/
}
} // res
}
json_decref( val );
@@ -567,7 +555,11 @@ static bool gbt_work_decode( const json_t *val, struct work *work )
if ( !s )
continue;
if ( !strcmp( s, "segwit" ) || !strcmp( s, "!segwit" ) )
{
segwit = true;
if ( opt_debug )
applog( LOG_INFO, "GBT: SegWit is enabled" );
}
}
}
// Segwit END
@@ -920,12 +912,12 @@ static bool gbt_work_decode( const json_t *val, struct work *work )
tmp = json_object_get( val, "workid" );
if ( tmp )
{
if ( !json_is_string( tmp ) )
{
applog( LOG_ERR, "JSON invalid workid" );
goto out;
}
work->workid = strdup( json_string_value( tmp ) );
if ( !json_is_string( tmp ) )
{
applog( LOG_ERR, "JSON invalid workid" );
goto out;
}
work->workid = strdup( json_string_value( tmp ) );
}
rc = true;
@@ -966,25 +958,25 @@ void scale_hash_for_display ( double* hashrate, char* prefix )
else { *prefix = 'Y'; *hashrate /= 1e24; }
}
static inline void sprintf_et( char *str, int seconds )
static inline void sprintf_et( char *str, long unsigned int seconds )
{
// sprintf's format for uint64_t is inconsistent: Linux treats it as long, Windows as long long.
unsigned int min = seconds / 60;
unsigned int sec = seconds % 60;
unsigned int hrs = min / 60;
long unsigned int min = seconds / 60;
long unsigned int sec = seconds % 60;
long unsigned int hrs = min / 60;
if ( unlikely( hrs ) )
{
unsigned int years = hrs / (24*365);
unsigned int days = hrs / 24;
if ( years )
sprintf( str, "%uy%ud", years, years % 365 );
else if ( days ) //0d00h
sprintf( str, "%ud%02uh", days, hrs % 24 );
long unsigned int days = hrs / 24;
long unsigned int years = days / 365;
if ( years ) // 0y000d
sprintf( str, "%luy%lud", years, days % 365 );
else if ( days ) // 0d00h
sprintf( str, "%lud%02luh", days, hrs % 24 );
else // 0h00m
sprintf( str, "%uh%02um", hrs, min % 60 );
sprintf( str, "%luh%02lum", hrs, min % 60 );
}
else // 0m00s
sprintf( str, "%um%02us", min, sec );
sprintf( str, "%lum%02lus", min, sec );
}
const long double exp32 = EXP32; // 2**32
@@ -1012,32 +1004,67 @@ static struct timeval last_submit_time = {0};
static inline int stats_ptr_incr( int p )
{
return ++p < s_stats_size ? p : 0;
return ++p % s_stats_size;
}
void report_summary_log( bool force )
{
struct timeval now, et, uptime, start_time;
pthread_mutex_lock( &stats_lock );
gettimeofday( &now, NULL );
timeval_subtract( &et, &now, &five_min_start );
if ( !( force && ( submit_sum || ( et.tv_sec > 5 ) ) )
&& ( et.tv_sec < 300 ) )
#if !(defined(__WINDOWS__) || defined(_WIN64) || defined(_WIN32))
// Display CPU temperature and clock rate.
int curr_temp = cpu_temp(0);
static struct timeval cpu_temp_time = {0};
struct timeval diff;
if ( !opt_quiet || ( curr_temp >= 80 ) )
{
pthread_mutex_unlock( &stats_lock );
return;
int wait_time = curr_temp >= 90 ? 5 : curr_temp >= 80 ? 30 :
curr_temp >= 70 ? 60 : 120;
timeval_subtract( &diff, &now, &cpu_temp_time );
if ( ( diff.tv_sec > wait_time )
|| ( ( curr_temp > prev_temp ) && ( curr_temp >= 75 ) ) )
{
char tempstr[32];
float lo_freq = 0., hi_freq = 0.;
memcpy( &cpu_temp_time, &now, sizeof(cpu_temp_time) );
linux_cpu_hilo_freq( &lo_freq, &hi_freq );
if ( use_colors && ( curr_temp >= 70 ) )
{
if ( curr_temp >= 80 )
sprintf( tempstr, "%s%d C%s", CL_RED, curr_temp, CL_WHT );
else
sprintf( tempstr, "%s%d C%s", CL_YLW, curr_temp, CL_WHT );
}
else
sprintf( tempstr, "%d C", curr_temp );
applog( LOG_NOTICE,"CPU temp: curr %s max %d, Freq: %.3f/%.3f GHz",
tempstr, hi_temp, lo_freq / 1e6, hi_freq / 1e6 );
if ( curr_temp > hi_temp ) hi_temp = curr_temp;
prev_temp = curr_temp;
}
}
#endif
if ( !( force && ( submit_sum || ( et.tv_sec > 5 ) ) )
&& ( et.tv_sec < 300 ) )
return;
// collect and reset periodic counters
pthread_mutex_lock( &stats_lock );
uint64_t submits = submit_sum; submit_sum = 0;
uint64_t accepts = accept_sum; accept_sum = 0;
uint64_t rejects = reject_sum; reject_sum = 0;
uint64_t stales = stale_sum; stale_sum = 0;
uint64_t solved = solved_sum; solved_sum = 0;
memcpy( &start_time, &five_min_start, sizeof start_time );
memcpy( &five_min_start, &now, sizeof now );
@@ -1048,12 +1075,12 @@ void report_summary_log( bool force )
double share_time = (double)et.tv_sec + (double)et.tv_usec / 1e6;
double ghrate = global_hashrate;
double shrate = share_time == 0. ? 0. : exp32 * last_targetdiff
* (double)(accepts) / share_time;
double sess_hrate = uptime.tv_sec == 0. ? 0. : exp32 * norm_diff_sum
/ (double)uptime.tv_sec;
double submit_rate = share_time == 0. ? 0. : (double)submits*60.
/ share_time;
double target_diff = exp32 * last_targetdiff;
double shrate = safe_div( target_diff * (double)(accepts),
share_time, 0. );
double sess_hrate = safe_div( exp32 * norm_diff_sum,
(double)uptime.tv_sec, 0. );
double submit_rate = safe_div( (double)submits * 60., share_time, 0. );
char shr_units[4] = {0};
char ghr_units[4] = {0};
char sess_hr_units[4] = {0};
@@ -1070,52 +1097,64 @@ void report_summary_log( bool force )
applog( LOG_BLUE, "%s: %s", algo_names[ opt_algo ], short_url );
applog2( LOG_NOTICE, "Periodic Report %s %s", et_str, upt_str );
applog2( LOG_INFO, "Share rate %.2f/min %.2f/min",
submit_rate, (double)submitted_share_count*60. /
( (double)uptime.tv_sec + (double)uptime.tv_usec / 1e6 ) );
submit_rate, (double)submitted_share_count*60. /
( (double)uptime.tv_sec + (double)uptime.tv_usec / 1e6 ) );
applog2( LOG_INFO, "Hash rate %7.2f%sh/s %7.2f%sh/s (%.2f%sh/s)",
shrate, shr_units, sess_hrate, sess_hr_units,
ghrate, ghr_units );
shrate, shr_units, sess_hrate, sess_hr_units, ghrate, ghr_units );
if ( accepted_share_count < submitted_share_count )
{
double lost_ghrate = uptime.tv_sec == 0 ? 0.
: exp32 * last_targetdiff
* (double)(submitted_share_count - accepted_share_count )
/ (double)uptime.tv_sec;
: target_diff
* (double)(submitted_share_count - accepted_share_count )
/ (double)uptime.tv_sec;
double lost_shrate = share_time == 0. ? 0.
: exp32 * last_targetdiff * (double)(submits - accepts )
/ share_time;
: target_diff * (double)(submits - accepts ) / share_time;
char lshr_units[4] = {0};
char lghr_units[4] = {0};
scale_hash_for_display( &lost_shrate, lshr_units );
scale_hash_for_display( &lost_ghrate, lghr_units );
applog2( LOG_INFO, "Lost hash rate %7.2f%sh/s %7.2f%sh/s",
lost_shrate, lshr_units, lost_ghrate, lghr_units );
applog2( LOG_INFO, "Lost hash rate %7.2f%sh/s %7.2f%sh/s",
lost_shrate, lshr_units, lost_ghrate, lghr_units );
}
applog2( LOG_INFO,"Submitted %6d %6d",
submits, submitted_share_count );
applog2( LOG_INFO,"Accepted %6d %6d",
accepts, accepted_share_count );
applog2( LOG_INFO,"Submitted %7d %7d",
submits, submitted_share_count );
applog2( LOG_INFO, "Accepted %7d %7d %5.1f%%",
accepts, accepted_share_count,
100. * safe_div( (double)accepted_share_count,
(double)submitted_share_count, 0. ) );
if ( stale_share_count )
applog2( LOG_INFO,"Stale %6d %6d",
stales, stale_share_count );
applog2( LOG_INFO, "Stale %7d %7d %5.1f%%",
stales, stale_share_count,
100. * safe_div( (double)stale_share_count,
(double)submitted_share_count, 0. ) );
if ( rejected_share_count )
applog2( LOG_INFO,"Rejected %6d %6d",
rejects, rejected_share_count );
applog2( LOG_INFO, "Rejected %7d %7d %5.1f%%",
rejects, rejected_share_count,
100. * safe_div( (double)rejected_share_count,
(double)submitted_share_count, 0. ) );
if ( solved_block_count )
applog2( LOG_INFO,"Blocks Solved %6d %6d",
solved, solved_block_count );
applog2( LOG_INFO,"Blocks Solved %7d %7d",
solved, solved_block_count );
applog2( LOG_INFO, "Hi/Lo Share Diff %.5g / %.5g",
highest_share, lowest_share );
}
highest_share, lowest_share );
bool lowdiff_debug = false;
int mismatch = submitted_share_count
- ( accepted_share_count + stale_share_count + rejected_share_count );
if ( mismatch )
{
if ( mismatch != 1 )
applog(LOG_WARNING,"Share count mismatch: %d, stats may be incorrect", mismatch );
else
applog(LOG_INFO,"Share count mismatch, submitted share may still be pending" );
}
}
static int share_result( int result, struct work *work,
const char *reason )
{
double share_time = 0.; //, share_ratio = 0.;
double share_time = 0.;
double hashrate = 0.;
int latency = 0;
struct share_stats_t my_stats = {0};
@@ -1156,11 +1195,6 @@ static int share_result( int result, struct work *work,
sizeof last_submit_time );
}
/*
share_ratio = my_stats.net_diff == 0. ? 0. : my_stats.share_diff /
my_stats.net_diff;
*/
// check result
if ( likely( result ) )
{
@@ -1190,9 +1224,11 @@ static int share_result( int result, struct work *work,
{
sprintf( ares, "A%d", accepted_share_count );
sprintf( bres, "B%d", solved_block_count );
stale = work ? work->data[ algo_gate.ntime_index ]
!= g_work.data[ algo_gate.ntime_index ] : false;
if ( reason ) stale = stale || strstr( reason, "job" );
if ( reason )
stale = strstr( reason, "job" );
else if ( work )
stale = work->data[ algo_gate.ntime_index ]
!= g_work.data[ algo_gate.ntime_index ];
if ( stale )
{
stale_share_count++;
@@ -1260,14 +1296,14 @@ static int share_result( int result, struct work *work,
if ( unlikely( !( opt_quiet || result || stale ) ) )
{
uint32_t str[8];
if ( reason )
applog( LOG_WARNING, "Reject reason: %s", reason );
// display share hash and target for troubleshooting
diff_to_hash( str, my_stats.share_diff );
applog2( LOG_INFO, "Hash: %08x%08x%08x...", str[7], str[6], str[5] );
uint32_t *targ;
if ( reason ) applog( LOG_WARNING, "Reject reason: %s", reason );
diff_to_hash( str, my_stats.share_diff );
applog2( LOG_INFO, "Hash: %08x%08x%08x%08x%08x%08x%08x%08x", str[7], str[6],
str[5], str[4], str[3], str[2], str[1], str[0] );
if ( work )
targ = work->target;
else
@@ -1275,7 +1311,8 @@ static int share_result( int result, struct work *work,
diff_to_hash( str, my_stats.target_diff );
targ = &str[0];
}
applog2( LOG_INFO, "Target: %08x%08x%08x...", targ[7], targ[6], targ[5] );
applog2( LOG_INFO, "Target: %08x%08x%08x%08x%08x%08x%08x%08x", targ[7], targ[6],
targ[5], targ[4], targ[3], targ[2], targ[1], targ[0] );
}
return 1;
}
@@ -1580,6 +1617,7 @@ start:
{
double miner_hr = 0.;
double net_hr = net_hashrate;
double nd = net_diff * exp32;
char net_hr_units[4] = {0};
char miner_hr_units[4] = {0};
char net_ttf[32];
@@ -1594,11 +1632,11 @@ start:
pthread_mutex_unlock( &stats_lock );
if ( net_hr > 0. )
sprintf_et( net_ttf, ( net_diff * exp32 ) / net_hr );
sprintf_et( net_ttf, nd / net_hr );
else
sprintf( net_ttf, "NA" );
if ( miner_hr > 0. )
sprintf_et( miner_ttf, ( net_diff * exp32 ) / miner_hr );
sprintf_et( miner_ttf, nd / miner_hr );
else
sprintf( miner_ttf, "NA" );
@@ -1848,10 +1886,19 @@ bool submit_solution( struct work *work, const void *hash,
work->data[ algo_gate.ntime_index ] );
}
if ( unlikely( lowdiff_debug ) )
if ( opt_debug )
{
uint32_t* h = (uint32_t*)hash;
uint32_t* t = (uint32_t*)work->target;
uint32_t* d = (uint32_t*)work->data;
unsigned char *xnonce2str = abin2hex( work->xnonce2,
work->xnonce2_len );
applog(LOG_INFO,"Thread %d, Nonce %08x, Xnonce2 %s", thr->id,
work->data[ algo_gate.nonce_index ], xnonce2str );
free( xnonce2str );
applog(LOG_INFO,"Data[0:19]: %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x", d[0],d[1],d[2],d[3],d[4],d[5],d[6],d[7],d[8],d[9] );
applog(LOG_INFO," : %08x %08x %08x %08x %08x %08x %08x %08x %08x %08x", d[10],d[11],d[12],d[13],d[14],d[15],d[16],d[17],d[18],d[19]);
applog(LOG_INFO,"Hash[7:0]: %08x %08x %08x %08x %08x %08x %08x %08x",
h[7],h[6],h[5],h[4],h[3],h[2],h[1],h[0]);
applog(LOG_INFO,"Targ[7:0]: %08x %08x %08x %08x %08x %08x %08x %08x",
@@ -2066,11 +2113,12 @@ static void stratum_gen_work( struct stratum_ctx *sctx, struct work *g_work )
if ( likely( hr > 0. ) )
{
double nd = net_diff * exp32;
char hr_units[4] = {0};
char block_ttf[32];
char share_ttf[32];
sprintf_et( block_ttf, ( net_diff * exp32 ) / hr );
sprintf_et( block_ttf, nd / hr );
sprintf_et( share_ttf, ( g_work->targetdiff * exp32 ) / hr );
scale_hash_for_display ( &hr, hr_units );
applog2( LOG_INFO, "TTF @ %.2f %sh/s: Block %s, Share %s",
@@ -2086,7 +2134,7 @@ static void stratum_gen_work( struct stratum_ctx *sctx, struct work *g_work )
: et.tv_sec / ( last_block_height - session_first_block );
if ( net_diff && net_ttf )
{
double net_hr = net_diff * exp32 / net_ttf;
double net_hr = nd / net_ttf;
char net_hr_units[4] = {0};
scale_hash_for_display ( &net_hr, net_hr_units );
@@ -2253,12 +2301,6 @@ static void *miner_thread( void *userdata )
if ( unlikely( !algo_gate.ready_to_mine( &work, &stratum, thr_id ) ) )
continue;
// conditional mining
if ( unlikely( !wanna_mine( thr_id ) ) )
{
sleep(5);
continue;
}
// LP_SCANTIME overrides opt_scantime option, is this right?
@@ -2333,6 +2375,8 @@ static void *miner_thread( void *userdata )
pthread_mutex_unlock( &stats_lock );
}
// This code is deprecated, scanhash should never return true.
// This remains as a backup in case some old implementations still exist.
// If unsubmitted nonce(s) are found, submit them now.
if ( unlikely( nonce_found && !opt_benchmark ) )
{
@@ -2359,48 +2403,6 @@ static void *miner_thread( void *userdata )
}
}
#if !(defined(__WINDOWS__) || defined(_WIN64) || defined(_WIN32))
// Display CPU temperature and clock rate.
int curr_temp, prev_hi_temp;
static struct timeval cpu_temp_time = {0};
pthread_mutex_lock( &stats_lock );
prev_hi_temp = hi_temp;
curr_temp = cpu_temp(0);
if ( curr_temp > hi_temp ) hi_temp = curr_temp;
pthread_mutex_unlock( &stats_lock );
if ( !opt_quiet || ( curr_temp >= 80 ) )
{
int wait_time = curr_temp >= 80 ? 20 : curr_temp >= 70 ? 60 : 120;
timeval_subtract( &diff, &tv_end, &cpu_temp_time );
if ( ( diff.tv_sec > wait_time ) || ( curr_temp > prev_hi_temp ) )
{
char tempstr[32];
float lo_freq = 0., hi_freq = 0.;
memcpy( &cpu_temp_time, &tv_end, sizeof(cpu_temp_time) );
linux_cpu_hilo_freq( &lo_freq, &hi_freq );
if ( use_colors && ( curr_temp >= 70 ) )
{
if ( curr_temp >= 80 )
sprintf( tempstr, "%s%d C%s", CL_RED, curr_temp, CL_WHT );
else
sprintf( tempstr, "%s%d C%s", CL_YLW, curr_temp, CL_WHT );
}
else
sprintf( tempstr, "%d C", curr_temp );
applog( LOG_NOTICE,"CPU temp: curr %s (max %d), Freq: %.3f/%.3f GHz",
tempstr, prev_hi_temp, lo_freq / 1e6, hi_freq / 1e6 );
}
}
#endif
// display hashrate
if ( unlikely( opt_hash_meter ) )
{
@@ -2440,11 +2442,23 @@ static void *miner_thread( void *userdata )
#if ((defined(_WIN64) || defined(__WINDOWS__)) || defined(_WIN32))
applog( LOG_NOTICE, "Total: %s %sH/s", hr, hr_units );
#else
applog( LOG_NOTICE, "Total: %s %sH/s, CPU temp: %dC",
hr, hr_units, (uint32_t)cpu_temp(0) );
float lo_freq = 0., hi_freq = 0.;
linux_cpu_hilo_freq( &lo_freq, &hi_freq );
applog( LOG_NOTICE,
"Total: %s %sH/s, Temp: %dC, Freq: %.3f/%.3f GHz",
hr, hr_units, (uint32_t)cpu_temp(0), lo_freq / 1e6,
hi_freq / 1e6 );
#endif
}
}
} // benchmark
// conditional mining
if ( unlikely( !wanna_mine( thr_id ) ) )
{
sleep(5);
continue;
}
} // miner_thread loop
out:
@@ -2789,6 +2803,189 @@ out:
return NULL;
}
static void show_credits()
{
printf("\n ********** "PACKAGE_NAME" "PACKAGE_VERSION" *********** \n");
printf(" A CPU miner with multi algo support and optimized for CPUs\n");
printf(" with AVX512, SHA and VAES extensions by JayDDee.\n");
printf(" BTC donation address: 12tdvfF7KmAsihBXQXynT6E6th2c2pByTT\n\n");
}
#define check_cpu_capability() cpu_capability( false )
#define display_cpu_capability() cpu_capability( true )
static bool cpu_capability( bool display_only )
{
char cpu_brand[0x40];
bool cpu_has_sse2 = has_sse2();
bool cpu_has_aes = has_aes_ni();
bool cpu_has_sse42 = has_sse42();
bool cpu_has_avx = has_avx();
bool cpu_has_avx2 = has_avx2();
bool cpu_has_sha = has_sha();
bool cpu_has_avx512 = has_avx512();
bool cpu_has_vaes = has_vaes();
bool sw_has_aes = false;
bool sw_has_sse2 = false;
bool sw_has_sse42 = false;
bool sw_has_avx = false;
bool sw_has_avx2 = false;
bool sw_has_avx512 = false;
bool sw_has_sha = false;
bool sw_has_vaes = false;
set_t algo_features = algo_gate.optimizations;
bool algo_has_sse2 = set_incl( SSE2_OPT, algo_features );
bool algo_has_aes = set_incl( AES_OPT, algo_features );
bool algo_has_sse42 = set_incl( SSE42_OPT, algo_features );
bool algo_has_avx2 = set_incl( AVX2_OPT, algo_features );
bool algo_has_avx512 = set_incl( AVX512_OPT, algo_features );
bool algo_has_sha = set_incl( SHA_OPT, algo_features );
bool algo_has_vaes = set_incl( VAES_OPT, algo_features );
bool algo_has_vaes256 = set_incl( VAES256_OPT, algo_features );
bool use_aes;
bool use_sse2;
bool use_sse42;
bool use_avx2;
bool use_avx512;
bool use_sha;
bool use_vaes;
bool use_none;
#ifdef __AES__
sw_has_aes = true;
#endif
#ifdef __SSE2__
sw_has_sse2 = true;
#endif
#ifdef __SSE4_2__
sw_has_sse42 = true;
#endif
#ifdef __AVX__
sw_has_avx = true;
#endif
#ifdef __AVX2__
sw_has_avx2 = true;
#endif
#if (defined(__AVX512F__) && defined(__AVX512DQ__) && defined(__AVX512BW__) && defined(__AVX512VL__))
sw_has_avx512 = true;
#endif
#ifdef __SHA__
sw_has_sha = true;
#endif
#ifdef __VAES__
sw_has_vaes = true;
#endif
// #if !((__AES__) || (__SSE2__))
// printf("Neither __AES__ nor __SSE2__ defined.\n");
// #endif
cpu_brand_string( cpu_brand );
printf( "CPU: %s\n", cpu_brand );
printf("SW built on " __DATE__
#ifdef _MSC_VER
" with VC++ 2013\n");
#elif defined(__GNUC__)
" with GCC");
printf(" %d.%d.%d\n", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
#else
printf("\n");
#endif
printf("CPU features: ");
if ( cpu_has_avx512 ) printf( " AVX512" );
else if ( cpu_has_avx2 ) printf( " AVX2 " );
else if ( cpu_has_avx ) printf( " AVX " );
else if ( cpu_has_sse42 ) printf( " SSE4.2" );
else if ( cpu_has_sse2 ) printf( " SSE2 " );
if ( cpu_has_vaes ) printf( " VAES" );
else if ( cpu_has_aes ) printf( " AES" );
if ( cpu_has_sha ) printf( " SHA" );
printf("\nSW features: ");
if ( sw_has_avx512 ) printf( " AVX512" );
else if ( sw_has_avx2 ) printf( " AVX2 " );
else if ( sw_has_avx ) printf( " AVX " );
else if ( sw_has_sse42 ) printf( " SSE4.2" );
else if ( sw_has_sse2 ) printf( " SSE2 " );
if ( sw_has_vaes ) printf( " VAES" );
else if ( sw_has_aes ) printf( " AES" );
if ( sw_has_sha ) printf( " SHA" );
printf("\nAlgo features:");
if ( algo_features == EMPTY_SET ) printf( " None" );
else
{
if ( algo_has_avx512 ) printf( " AVX512" );
else if ( algo_has_avx2 ) printf( " AVX2 " );
else if ( algo_has_sse42 ) printf( " SSE4.2" );
else if ( algo_has_sse2 ) printf( " SSE2 " );
if ( algo_has_vaes ) printf( " VAES" );
else if ( algo_has_aes ) printf( " AES" );
if ( algo_has_sha ) printf( " SHA" );
}
printf("\n");
if ( display_only ) return true;
// Check for CPU and build incompatibilities
if ( !cpu_has_sse2 )
{
printf( "A CPU with SSE2 is required to use cpuminer-opt\n" );
return false;
}
if ( sw_has_avx2 && !( cpu_has_avx2 && cpu_has_aes ) )
{
printf( "The SW build requires a CPU with AES and AVX2!\n" );
return false;
}
if ( sw_has_sse42 && !cpu_has_sse42 )
{
printf( "The SW build requires a CPU with SSE4.2!\n" );
return false;
}
if ( sw_has_aes && !cpu_has_aes )
{
printf( "The SW build requires a CPU with AES!\n" );
return false;
}
if ( sw_has_sha && !cpu_has_sha )
{
printf( "The SW build requires a CPU with SHA!\n" );
return false;
}
// Determine mining options
use_sse2 = cpu_has_sse2 && algo_has_sse2;
use_aes = cpu_has_aes && sw_has_aes && algo_has_aes;
use_sse42 = cpu_has_sse42 && sw_has_sse42 && algo_has_sse42;
use_avx2 = cpu_has_avx2 && sw_has_avx2 && algo_has_avx2;
use_avx512 = cpu_has_avx512 && sw_has_avx512 && algo_has_avx512;
use_sha = cpu_has_sha && sw_has_sha && algo_has_sha;
use_vaes = cpu_has_vaes && sw_has_vaes && algo_has_vaes
&& ( use_avx512 || algo_has_vaes256 );
use_none = !( use_sse2 || use_aes || use_sse42 || use_avx512 || use_avx2 ||
use_sha || use_vaes );
// Display best options
printf( "\nStarting miner with" );
if ( use_none ) printf( " no optimizations" );
else
{
if ( use_avx512 ) printf( " AVX512" );
else if ( use_avx2 ) printf( " AVX2" );
else if ( use_sse42 ) printf( " SSE4.2" );
else if ( use_sse2 ) printf( " SSE2" );
if ( use_vaes ) printf( " VAES" );
else if ( use_aes ) printf( " AES" );
if ( use_sha ) printf( " SHA" );
}
printf( "...\n\n" );
return true;
}
void show_version_and_exit(void)
{
printf("\n built on " __DATE__
@@ -2836,7 +3033,6 @@ void show_version_and_exit(void)
#endif
"\n\n");
/* dependencies versions */
printf("%s\n", curl_version());
#ifdef JANSSON_VERSION
printf("jansson/%s ", JANSSON_VERSION);
@@ -2848,7 +3044,6 @@ void show_version_and_exit(void)
exit(0);
}
void show_usage_and_exit(int status)
{
if (status)
@@ -3185,14 +3380,12 @@ void parse_arg(int key, char *arg )
ul = strtoull( p, NULL, 16 );
else
ul = atoll( arg );
// if ( ul > ( 1ULL << num_cpus ) - 1ULL )
// ul = -1LL;
#if AFFINITY_USES_UINT128
// replicate the low 64 bits to make a full 128 bit mask if there are more
// than 64 CPUs, otherwise zero extend the upper half.
opt_affinity = (uint128_t)ul;
if ( num_cpus > 64 )
opt_affinity = (opt_affinity << 64 ) | opt_affinity;
opt_affinity |= opt_affinity << 64;
#else
opt_affinity = ul;
#endif
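// Worked example (editor's sketch, assuming a hypothetical 96-CPU box):
// with --cpu-affinity 0x5 the 64 bit mask 0x5 is replicated into the
// high half, giving the 128 bit mask
//    0x00000000000000050000000000000005
// so CPUs 0, 2, 64 and 66 are enabled; on a system with 64 or fewer CPUs
// the high half stays zero extended.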
@@ -3201,6 +3394,8 @@ void parse_arg(int key, char *arg )
v = atoi(arg);
if (v < 0 || v > 5) /* sanity check */
show_usage_and_exit(1);
// option is deprecated, show warning
applog( LOG_WARNING, "High priority mining threads may cause system instability");
opt_priority = v;
break;
case 'N': // N parameter for various scrypt algos
@@ -3236,11 +3431,15 @@ void parse_arg(int key, char *arg )
case 1024:
opt_randomize = true;
break;
case 1026:
opt_reset_on_stale = true;
case 1027: // data-file
opt_data_file = strdup( arg );
break;
case 1028: // verify
opt_verify = true;
break;
case 'V':
show_version_and_exit();
display_cpu_capability();
exit(0);
case 'h':
show_usage_and_exit(0);
@@ -3357,185 +3556,6 @@ static int thread_create(struct thr_info *thr, void* func)
return err;
}
static void show_credits()
{
printf("\n ********** "PACKAGE_NAME" "PACKAGE_VERSION" *********** \n");
printf(" A CPU miner with multi algo support and optimized for CPUs\n");
printf(" with AVX512, SHA and VAES extensions by JayDDee.\n");
printf(" BTC donation address: 12tdvfF7KmAsihBXQXynT6E6th2c2pByTT\n\n");
}
bool check_cpu_capability ()
{
char cpu_brand[0x40];
bool cpu_has_sse2 = has_sse2();
bool cpu_has_aes = has_aes_ni();
bool cpu_has_sse42 = has_sse42();
bool cpu_has_avx = has_avx();
bool cpu_has_avx2 = has_avx2();
bool cpu_has_sha = has_sha();
bool cpu_has_avx512 = has_avx512();
bool cpu_has_vaes = has_vaes();
bool sw_has_aes = false;
bool sw_has_sse2 = false;
bool sw_has_sse42 = false;
bool sw_has_avx = false;
bool sw_has_avx2 = false;
bool sw_has_avx512 = false;
bool sw_has_sha = false;
bool sw_has_vaes = false;
set_t algo_features = algo_gate.optimizations;
bool algo_has_sse2 = set_incl( SSE2_OPT, algo_features );
bool algo_has_aes = set_incl( AES_OPT, algo_features );
bool algo_has_sse42 = set_incl( SSE42_OPT, algo_features );
bool algo_has_avx2 = set_incl( AVX2_OPT, algo_features );
bool algo_has_avx512 = set_incl( AVX512_OPT, algo_features );
bool algo_has_sha = set_incl( SHA_OPT, algo_features );
bool algo_has_vaes = set_incl( VAES_OPT, algo_features );
bool algo_has_vaes256 = set_incl( VAES256_OPT, algo_features );
bool use_aes;
bool use_sse2;
bool use_sse42;
bool use_avx2;
bool use_avx512;
bool use_sha;
bool use_vaes;
bool use_none;
#ifdef __AES__
sw_has_aes = true;
#endif
#ifdef __SSE2__
sw_has_sse2 = true;
#endif
#ifdef __SSE4_2__
sw_has_sse42 = true;
#endif
#ifdef __AVX__
sw_has_avx = true;
#endif
#ifdef __AVX2__
sw_has_avx2 = true;
#endif
#if (defined(__AVX512F__) && defined(__AVX512DQ__) && defined(__AVX512BW__) && defined(__AVX512VL__))
sw_has_avx512 = true;
#endif
#ifdef __SHA__
sw_has_sha = true;
#endif
#ifdef __VAES__
sw_has_vaes = true;
#endif
// #if !((__AES__) || (__SSE2__))
// printf("Neither __AES__ nor __SSE2__ defined.\n");
// #endif
cpu_brand_string( cpu_brand );
printf( "CPU: %s\n", cpu_brand );
printf("SW built on " __DATE__
#ifdef _MSC_VER
" with VC++ 2013\n");
#elif defined(__GNUC__)
" with GCC");
printf(" %d.%d.%d\n", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
#else
printf("\n");
#endif
printf("CPU features: ");
if ( cpu_has_avx512 ) printf( " AVX512" );
else if ( cpu_has_avx2 ) printf( " AVX2 " );
else if ( cpu_has_avx ) printf( " AVX " );
else if ( cpu_has_sse42 ) printf( " SSE4.2" );
else if ( cpu_has_sse2 ) printf( " SSE2 " );
if ( cpu_has_vaes ) printf( " VAES" );
else if ( cpu_has_aes ) printf( " AES" );
if ( cpu_has_sha ) printf( " SHA" );
printf("\nSW features: ");
if ( sw_has_avx512 ) printf( " AVX512" );
else if ( sw_has_avx2 ) printf( " AVX2 " );
else if ( sw_has_avx ) printf( " AVX " );
else if ( sw_has_sse42 ) printf( " SSE4.2" );
else if ( sw_has_sse2 ) printf( " SSE2 " );
if ( sw_has_vaes ) printf( " VAES" );
else if ( sw_has_aes ) printf( " AES" );
if ( sw_has_sha ) printf( " SHA" );
printf("\nAlgo features:");
if ( algo_features == EMPTY_SET ) printf( " None" );
else
{
if ( algo_has_avx512 ) printf( " AVX512" );
else if ( algo_has_avx2 ) printf( " AVX2 " );
else if ( algo_has_sse42 ) printf( " SSE4.2" );
else if ( algo_has_sse2 ) printf( " SSE2 " );
if ( algo_has_vaes ) printf( " VAES" );
else if ( algo_has_aes ) printf( " AES" );
if ( algo_has_sha ) printf( " SHA" );
}
printf("\n");
// Check for CPU and build incompatibilities
if ( !cpu_has_sse2 )
{
printf( "A CPU with SSE2 is required to use cpuminer-opt\n" );
return false;
}
if ( sw_has_avx2 && !( cpu_has_avx2 && cpu_has_aes ) )
{
printf( "The SW build requires a CPU with AES and AVX2!\n" );
return false;
}
if ( sw_has_sse42 && !cpu_has_sse42 )
{
printf( "The SW build requires a CPU with SSE4.2!\n" );
return false;
}
if ( sw_has_aes && !cpu_has_aes )
{
printf( "The SW build requires a CPU with AES!\n" );
return false;
}
if ( sw_has_sha && !cpu_has_sha )
{
printf( "The SW build requires a CPU with SHA!\n" );
return false;
}
// Determine mining options
use_sse2 = cpu_has_sse2 && algo_has_sse2;
use_aes = cpu_has_aes && sw_has_aes && algo_has_aes;
use_sse42 = cpu_has_sse42 && sw_has_sse42 && algo_has_sse42;
use_avx2 = cpu_has_avx2 && sw_has_avx2 && algo_has_avx2;
use_avx512 = cpu_has_avx512 && sw_has_avx512 && algo_has_avx512;
use_sha = cpu_has_sha && sw_has_sha && algo_has_sha;
use_vaes = cpu_has_vaes && sw_has_vaes && algo_has_vaes
&& ( use_avx512 || algo_has_vaes256 );
use_none = !( use_sse2 || use_aes || use_sse42 || use_avx512 || use_avx2 ||
use_sha || use_vaes );
// Display best options
printf( "\nStarting miner with" );
if ( use_none ) printf( " no optimizations" );
else
{
if ( use_avx512 ) printf( " AVX512" );
else if ( use_avx2 ) printf( " AVX2" );
else if ( use_sse42 ) printf( " SSE4.2" );
else if ( use_sse2 ) printf( " SSE2" );
if ( use_vaes ) printf( " VAES" );
else if ( use_aes ) printf( " AES" );
if ( use_sha ) printf( " SHA" );
}
printf( "...\n\n" );
return true;
}
void get_defconfig_path(char *out, size_t bufsize, char *argv0);
int main(int argc, char *argv[])
@@ -3597,6 +3617,11 @@ int main(int argc, char *argv[])
fprintf(stderr, "%s: no algo supplied\n", argv[0]);
show_usage_and_exit(1);
}
if ( !register_algo_gate( opt_algo, &algo_gate ) ) exit(1);
if ( !check_cpu_capability() ) exit(1);
if ( !opt_benchmark )
{
if ( !short_url )
@@ -3636,7 +3661,7 @@ int main(int argc, char *argv[])
}
// All options must be set before starting the gate
if ( !register_algo_gate( opt_algo, &algo_gate ) ) exit(1);
// if ( !register_algo_gate( opt_algo, &algo_gate ) ) exit(1);
if ( coinbase_address )
{
@@ -3655,7 +3680,7 @@ int main(int argc, char *argv[])
memcpy( &five_min_start, &last_submit_time, sizeof (struct timeval) );
memcpy( &session_start, &last_submit_time, sizeof (struct timeval) );
if ( !check_cpu_capability() ) exit(1);
// if ( !check_cpu_capability() ) exit(1);
pthread_mutex_init( &stats_lock, NULL );
pthread_rwlock_init( &g_work_lock, NULL );

miner.h

@@ -457,9 +457,6 @@ bool stratum_subscribe(struct stratum_ctx *sctx);
bool stratum_authorize(struct stratum_ctx *sctx, const char *user, const char *pass);
bool stratum_handle_method(struct stratum_ctx *sctx, const char *s);
extern bool lowdiff_debug;
extern bool aes_ni_supported;
extern char *rpc_user;
@@ -549,7 +546,7 @@ enum algos {
ALGO_LYRA2REV3,
ALGO_LYRA2Z,
ALGO_LYRA2Z330,
ALGO_M7M,
ALGO_M7M,
ALGO_MINOTAUR,
ALGO_MYR_GR,
ALGO_NEOSCRYPT,
@@ -576,6 +573,7 @@ enum algos {
ALGO_TRIBUS,
ALGO_VANILLA,
ALGO_VELTOR,
ALGO_VERTHASH,
ALGO_WHIRLPOOL,
ALGO_WHIRLPOOLX,
ALGO_X11,
@@ -643,7 +641,7 @@ static const char* const algo_names[] = {
"lyra2z330",
"m7m",
"minotaur",
"myr-gr",
"myr-gr",
"neoscrypt",
"nist5",
"pentablake",
@@ -668,6 +666,7 @@ static const char* const algo_names[] = {
"tribus",
"vanilla",
"veltor",
"verthash",
"whirlpool",
"whirlpoolx",
"x11",
@@ -738,7 +737,6 @@ extern uint32_t opt_work_size;
extern double *thr_hashrates;
extern double global_hashrate;
extern double stratum_diff;
extern bool opt_reset_on_stale;
extern double net_diff;
extern double net_hashrate;
extern int opt_param_n;
@@ -763,6 +761,8 @@ extern pthread_mutex_t stats_lock;
extern bool opt_sapling;
extern const int pk_buffer_size_max;
extern int pk_buffer_size;
extern char *opt_data_file;
extern bool opt_verify;
static char const usage[] = "\
Usage: cpuminer [OPTIONS]\n\
@@ -771,7 +771,7 @@ Options:\n\
allium Garlicoin (GRLC)\n\
anime Animecoin (ANI)\n\
argon2 Argon2 Coin (AR2)\n\
argon2d250 argon2d-crds, Credits (CRDS)\n\
argon2d250\n\
argon2d500 argon2d-dyn, Dynamic (DYN)\n\
argon2d4096 argon2d-uis, Unitus (UIS)\n\
axiom Shabal-256 MemoHash\n\
@@ -796,13 +796,13 @@ Options:\n\
lyra2h Hppcoin\n\
lyra2re lyra2\n\
lyra2rev2 lyrav2\n\
lyra2rev3 lyrav2v3, Vertcoin\n\
lyra2rev3 lyrav2v3\n\
lyra2z\n\
lyra2z330 Lyra2 330 rows\n\
m7m Magi (XMG)\n\
myr-gr Myriad-Groestl\n\
minotaur Ringcoin (RNG)\n\
neoscrypt NeoScrypt(128, 2, 1)\n\
neoscrypt NeoScrypt(128, 2, 1)\n\
nist5 Nist5\n\
pentablake 5 x blake512\n\
phi1612 phi\n\
@@ -816,7 +816,7 @@ Options:\n\
sha256d Double SHA-256\n\
sha256q Quad SHA-256, Pyrite (PYE)\n\
sha256t Triple SHA-256, Onecoin (OC)\n\
sha3d Double Keccak256 (BSHA3)\n\
sha3d Double Keccak256 (BSHA3)\n\
shavite3 Shavite3\n\
skein Skein+Sha (Skeincoin)\n\
skein2 Double Skein (Woodcoin)\n\
@@ -827,6 +827,7 @@ Options:\n\
tribus Denarius (DNR)\n\
vanilla blake256r8vnl (VCash)\n\
veltor\n\
verthash\n\
whirlpool\n\
whirlpoolx\n\
x11 Dash\n\
@@ -875,7 +876,6 @@ Options:\n\
-s, --scantime=N upper bound on time spent scanning current work when\n\
long polling is unavailable, in seconds (default: 5)\n\
--randomize Randomize scan range start to reduce duplicates\n\
--reset-on-stale Workaround reset stratum if too many stale shares\n\
-f, --diff-factor Divide req. difficulty by this factor (std is 1.0)\n\
-m, --diff-multiplier Multiply difficulty by this factor (std is 1.0)\n\
--hash-meter Display thread hash rates\n\
@@ -900,12 +900,14 @@ Options:\n\
--benchmark run in offline benchmark mode\n\
--cpu-affinity set process affinity to cpu core(s), mask 0x3 for cores 0 and 1\n\
--cpu-priority set process priority (default: 0 idle, 2 normal to 5 highest)\n\
-b, --api-bind IP/Port for the miner API (default: 127.0.0.1:4048)\n\
-b, --api-bind=address[:port] IP address for the miner API (default port: 4048)\n\
--api-remote Allow remote control\n\
--max-temp=N Only mine if cpu temp is less than specified value (linux)\n\
--max-rate=N[KMG] Only mine if net hashrate is less than specified value\n\
--max-diff=N Only mine if net difficulty is less than specified value\n\
-c, --config=FILE load a JSON-format configuration file\n\
--data-file path and name of data file\n\
--verify enable additional time consuming start up tests\n\
-V, --version display version information and exit\n\
-h, --help display this help text and exit\n\
";
@@ -963,7 +965,6 @@ static struct option const options[] = {
{ "retries", 1, NULL, 'r' },
{ "retry-pause", 1, NULL, 1025 },
{ "randomize", 0, NULL, 1024 },
{ "reset-on-stale", 0, NULL, 1026 },
{ "scantime", 1, NULL, 's' },
#ifdef HAVE_SYSLOG_H
{ "syslog", 0, NULL, 'S' },
@@ -974,6 +975,8 @@ static struct option const options[] = {
{ "url", 1, NULL, 'o' },
{ "user", 1, NULL, 'u' },
{ "userpass", 1, NULL, 'O' },
{ "data-file", 1, NULL, 1027 },
{ "verify", 0, NULL, 1028 },
{ "version", 0, NULL, 'V' },
{ 0, 0, 0, 0 }
};


@@ -131,7 +131,7 @@
// If a sequence of constants is to be used it can be more efficient to
// use arithmetic with already existing constants to generate new ones.
//
// ex: const __m512i one = _mm512_const1_64( 1 );
// ex: const __m512i one = m512_one_64;
// const __m512i two = _mm512_add_epi64( one, one );
//
//////////////////////////////////////////////////////////////////////////


@@ -1225,37 +1225,6 @@ static inline void intrlv_4x64( void *dst, const void *src0,
d[31] = _mm_unpackhi_epi64( s2[7], s3[7] );
}
/*
static inline void intrlv_4x64( void *dst, void *src0,
void *src1, void *src2, void *src3, int bit_len )
{
uint64_t *d = (uint64_t*)dst;
uint64_t *s0 = (uint64_t*)src0;
uint64_t *s1 = (uint64_t*)src1;
uint64_t *s2 = (uint64_t*)src2;
uint64_t *s3 = (uint64_t*)src3;
d[ 0] = s0[ 0]; d[ 1] = s1[ 0]; d[ 2] = s2[ 0]; d[ 3] = s3[ 0];
d[ 4] = s0[ 1]; d[ 5] = s1[ 1]; d[ 6] = s2[ 1]; d[ 7] = s3[ 1];
d[ 8] = s0[ 2]; d[ 9] = s1[ 2]; d[ 10] = s2[ 2]; d[ 11] = s3[ 2];
d[ 12] = s0[ 3]; d[ 13] = s1[ 3]; d[ 14] = s2[ 3]; d[ 15] = s3[ 3];
if ( bit_len <= 256 ) return;
d[ 16] = s0[ 4]; d[ 17] = s1[ 4]; d[ 18] = s2[ 4]; d[ 19] = s3[ 4];
d[ 20] = s0[ 5]; d[ 21] = s1[ 5]; d[ 22] = s2[ 5]; d[ 23] = s3[ 5];
d[ 24] = s0[ 6]; d[ 25] = s1[ 6]; d[ 26] = s2[ 6]; d[ 27] = s3[ 6];
d[ 28] = s0[ 7]; d[ 29] = s1[ 7]; d[ 30] = s2[ 7]; d[ 31] = s3[ 7];
if ( bit_len <= 512 ) return;
d[ 32] = s0[ 8]; d[ 33] = s1[ 8]; d[ 34] = s2[ 8]; d[ 35] = s3[ 8];
d[ 36] = s0[ 9]; d[ 37] = s1[ 9]; d[ 38] = s2[ 9]; d[ 39] = s3[ 9];
if ( bit_len <= 640 ) return;
d[ 40] = s0[10]; d[ 41] = s1[10]; d[ 42] = s2[10]; d[ 43] = s3[10];
d[ 44] = s0[11]; d[ 45] = s1[11]; d[ 46] = s2[11]; d[ 47] = s3[11];
d[ 48] = s0[12]; d[ 49] = s1[12]; d[ 50] = s2[12]; d[ 51] = s3[12];
d[ 52] = s0[13]; d[ 53] = s1[13]; d[ 54] = s2[13]; d[ 55] = s3[13];
d[ 56] = s0[14]; d[ 57] = s1[14]; d[ 58] = s2[14]; d[ 59] = s3[14];
d[ 60] = s0[15]; d[ 61] = s1[15]; d[ 62] = s2[15]; d[ 63] = s3[15];
}
*/
static inline void intrlv_4x64_512( void *dst, const void *src0,
const void *src1, const void *src2, const void *src3 )
{
@@ -1282,26 +1251,6 @@ static inline void intrlv_4x64_512( void *dst, const void *src0,
d[15] = _mm_unpackhi_epi64( s2[3], s3[3] );
}
/*
static inline void intrlv_4x64_512( void *dst, const void *src0,
const void *src1, const void *src2, const void *src3 )
{
uint64_t *d = (uint64_t*)dst;
const uint64_t *s0 = (const uint64_t*)src0;
const uint64_t *s1 = (const uint64_t*)src1;
const uint64_t *s2 = (const uint64_t*)src2;
const uint64_t *s3 = (const uint64_t*)src3;
d[ 0] = s0[ 0]; d[ 1] = s1[ 0]; d[ 2] = s2[ 0]; d[ 3] = s3[ 0];
d[ 4] = s0[ 1]; d[ 5] = s1[ 1]; d[ 6] = s2[ 1]; d[ 7] = s3[ 1];
d[ 8] = s0[ 2]; d[ 9] = s1[ 2]; d[ 10] = s2[ 2]; d[ 11] = s3[ 2];
d[ 12] = s0[ 3]; d[ 13] = s1[ 3]; d[ 14] = s2[ 3]; d[ 15] = s3[ 3];
d[ 16] = s0[ 4]; d[ 17] = s1[ 4]; d[ 18] = s2[ 4]; d[ 19] = s3[ 4];
d[ 20] = s0[ 5]; d[ 21] = s1[ 5]; d[ 22] = s2[ 5]; d[ 23] = s3[ 5];
d[ 24] = s0[ 6]; d[ 25] = s1[ 6]; d[ 26] = s2[ 6]; d[ 27] = s3[ 6];
d[ 28] = s0[ 7]; d[ 29] = s1[ 7]; d[ 30] = s2[ 7]; d[ 31] = s3[ 7];
}
*/
static inline void dintrlv_4x64( void *dst0, void *dst1, void *dst2,
void *dst3, const void *src, const int bit_len )
{
@@ -1347,38 +1296,6 @@ static inline void dintrlv_4x64( void *dst0, void *dst1, void *dst2,
d3[7] = _mm_unpackhi_epi64( s[29], s[31] );
}
/*
static inline void dintrlv_4x64( void *dst0, void *dst1, void *dst2,
void *dst3, const void *src, int bit_len )
{
uint64_t *d0 = (uint64_t*)dst0;
uint64_t *d1 = (uint64_t*)dst1;
uint64_t *d2 = (uint64_t*)dst2;
uint64_t *d3 = (uint64_t*)dst3;
const uint64_t *s = (const uint64_t*)src;
d0[ 0] = s[ 0]; d1[ 0] = s[ 1]; d2[ 0] = s[ 2]; d3[ 0] = s[ 3];
d0[ 1] = s[ 4]; d1[ 1] = s[ 5]; d2[ 1] = s[ 6]; d3[ 1] = s[ 7];
d0[ 2] = s[ 8]; d1[ 2] = s[ 9]; d2[ 2] = s[10]; d3[ 2] = s[11];
d0[ 3] = s[12]; d1[ 3] = s[13]; d2[ 3] = s[14]; d3[ 3] = s[15];
if ( bit_len <= 256 ) return;
d0[ 4] = s[16]; d1[ 4] = s[17]; d2[ 4] = s[18]; d3[ 4] = s[19];
d0[ 5] = s[20]; d1[ 5] = s[21]; d2[ 5] = s[22]; d3[ 5] = s[23];
d0[ 6] = s[24]; d1[ 6] = s[25]; d2[ 6] = s[26]; d3[ 6] = s[27];
d0[ 7] = s[28]; d1[ 7] = s[29]; d2[ 7] = s[30]; d3[ 7] = s[31];
if ( bit_len <= 512 ) return;
d0[ 8] = s[32]; d1[ 8] = s[33]; d2[ 8] = s[34]; d3[ 8] = s[35];
d0[ 9] = s[36]; d1[ 9] = s[37]; d2[ 9] = s[38]; d3[ 9] = s[39];
if ( bit_len <= 640 ) return;
d0[10] = s[40]; d1[10] = s[41]; d2[10] = s[42]; d3[10] = s[43];
d0[11] = s[44]; d1[11] = s[45]; d2[11] = s[46]; d3[11] = s[47];
d0[12] = s[48]; d1[12] = s[49]; d2[12] = s[50]; d3[12] = s[51];
d0[13] = s[52]; d1[13] = s[53]; d2[13] = s[54]; d3[13] = s[55];
d0[14] = s[56]; d1[14] = s[57]; d2[14] = s[58]; d3[14] = s[59];
d0[15] = s[60]; d1[15] = s[61]; d2[15] = s[62]; d3[15] = s[63];
}
*/
static inline void dintrlv_4x64_512( void *dst0, void *dst1, void *dst2,
void *dst3, const void *src )
{
@@ -1405,26 +1322,6 @@ static inline void dintrlv_4x64_512( void *dst0, void *dst1, void *dst2,
d3[3] = _mm_unpackhi_epi64( s[13], s[15] );
}
/*
static inline void dintrlv_4x64_512( void *dst0, void *dst1, void *dst2,
void *dst3, const void *src )
{
uint64_t *d0 = (uint64_t*)dst0;
uint64_t *d1 = (uint64_t*)dst1;
uint64_t *d2 = (uint64_t*)dst2;
uint64_t *d3 = (uint64_t*)dst3;
const uint64_t *s = (const uint64_t*)src;
d0[ 0] = s[ 0]; d1[ 0] = s[ 1]; d2[ 0] = s[ 2]; d3[ 0] = s[ 3];
d0[ 1] = s[ 4]; d1[ 1] = s[ 5]; d2[ 1] = s[ 6]; d3[ 1] = s[ 7];
d0[ 2] = s[ 8]; d1[ 2] = s[ 9]; d2[ 2] = s[10]; d3[ 2] = s[11];
d0[ 3] = s[12]; d1[ 3] = s[13]; d2[ 3] = s[14]; d3[ 3] = s[15];
d0[ 4] = s[16]; d1[ 4] = s[17]; d2[ 4] = s[18]; d3[ 4] = s[19];
d0[ 5] = s[20]; d1[ 5] = s[21]; d2[ 5] = s[22]; d3[ 5] = s[23];
d0[ 6] = s[24]; d1[ 6] = s[25]; d2[ 6] = s[26]; d3[ 6] = s[27];
d0[ 7] = s[28]; d1[ 7] = s[29]; d2[ 7] = s[30]; d3[ 7] = s[31];
}
*/
static inline void extr_lane_4x64( void *d, const void *s,
const int lane, const int bit_len )
{
@@ -1440,9 +1337,41 @@ static inline void extr_lane_4x64( void *d, const void *s,
}
#if defined(__AVX2__)
// Doesn't really need AVX2, just SSSE3, but is only used with AVX2 code.
// There are alignment problems with the source buffer on Windows,
// so the 256 bit bswap can't be used.
static inline void mm256_intrlv80_4x64( void *d, const void *src )
{
__m128i s0 = casti_m128i( src,0 );
__m128i s1 = casti_m128i( src,1 );
__m128i s2 = casti_m128i( src,2 );
__m128i s3 = casti_m128i( src,3 );
__m128i s4 = casti_m128i( src,4 );
casti_m128i( d, 0 ) =
casti_m128i( d, 1 ) = _mm_shuffle_epi32( s0, 0x44 );
casti_m128i( d, 2 ) =
casti_m128i( d, 3 ) = _mm_shuffle_epi32( s0, 0xee );
casti_m128i( d, 4 ) =
casti_m128i( d, 5 ) = _mm_shuffle_epi32( s1, 0x44 );
casti_m128i( d, 6 ) =
casti_m128i( d, 7 ) = _mm_shuffle_epi32( s1, 0xee );
casti_m128i( d, 8 ) =
casti_m128i( d, 9 ) = _mm_shuffle_epi32( s2, 0x44 );
casti_m128i( d, 10 ) =
casti_m128i( d, 11 ) = _mm_shuffle_epi32( s2, 0xee );
casti_m128i( d, 12 ) =
casti_m128i( d, 13 ) = _mm_shuffle_epi32( s3, 0x44 );
casti_m128i( d, 14 ) =
casti_m128i( d, 15 ) = _mm_shuffle_epi32( s3, 0xee );
casti_m128i( d, 16 ) =
casti_m128i( d, 17 ) = _mm_shuffle_epi32( s4, 0x44 );
casti_m128i( d, 18 ) =
casti_m128i( d, 19 ) = _mm_shuffle_epi32( s4, 0xee );
}
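// Editor's note on the shuffle controls above: 0x44 selects 32 bit
// elements {1,0,1,0}, duplicating the low 64 bit half of the source, and
// 0xee selects {3,2,3,2}, duplicating the high half. Each 64 bit word of
// the 80 byte block is therefore stored to two consecutive 128 bit
// destinations, i.e. broadcast across all 4 interleaved lanes.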
static inline void mm256_bswap32_intrlv80_4x64( void *d, const void *src )
{
@@ -1636,40 +1565,6 @@ static inline void intrlv_8x64_512( void *dst, const void *src0,
d[31] = _mm_unpackhi_epi64( s6[3], s7[3] );
}
/*
#define ILEAVE_8x64( i ) do \
{ \
uint64_t *d = (uint64_t*)(dst) + ( (i) << 3 ); \
d[0] = *( (const uint64_t*)(s0) +(i) ); \
d[1] = *( (const uint64_t*)(s1) +(i) ); \
d[2] = *( (const uint64_t*)(s2) +(i) ); \
d[3] = *( (const uint64_t*)(s3) +(i) ); \
d[4] = *( (const uint64_t*)(s4) +(i) ); \
d[5] = *( (const uint64_t*)(s5) +(i) ); \
d[6] = *( (const uint64_t*)(s6) +(i) ); \
d[7] = *( (const uint64_t*)(s7) +(i) ); \
} while(0)
static inline void intrlv_8x64( void *dst, const void *s0,
const void *s1, const void *s2, const void *s3, const void *s4,
const void *s5, const void *s6, const void *s7, int bit_len )
{
ILEAVE_8x64( 0 ); ILEAVE_8x64( 1 );
ILEAVE_8x64( 2 ); ILEAVE_8x64( 3 );
if ( bit_len <= 256 ) return;
ILEAVE_8x64( 4 ); ILEAVE_8x64( 5 );
ILEAVE_8x64( 6 ); ILEAVE_8x64( 7 );
if ( bit_len <= 512 ) return;
ILEAVE_8x64( 8 ); ILEAVE_8x64( 9 );
if ( bit_len <= 640 ) return;
ILEAVE_8x64( 10 ); ILEAVE_8x64( 11 );
ILEAVE_8x64( 12 ); ILEAVE_8x64( 13 );
ILEAVE_8x64( 14 ); ILEAVE_8x64( 15 );
}
#undef ILEAVE_8x64
*/
static inline void dintrlv_8x64( void *dst0, void *dst1, void *dst2,
void *dst3, void *dst4, void *dst5, void *dst6, void *dst7,
@@ -1815,39 +1710,6 @@ static inline void dintrlv_8x64_512( void *dst0, void *dst1, void *dst2,
d7[3] = _mm_unpackhi_epi64( s[27], s[31] );
}
/*
#define DLEAVE_8x64( i ) do \
{ \
const uint64_t *s = (const uint64_t*)(src) + ( (i) << 3 ); \
*( (uint64_t*)(d0) +(i) ) = s[0]; \
*( (uint64_t*)(d1) +(i) ) = s[1]; \
*( (uint64_t*)(d2) +(i) ) = s[2]; \
*( (uint64_t*)(d3) +(i) ) = s[3]; \
*( (uint64_t*)(d4) +(i) ) = s[4]; \
*( (uint64_t*)(d5) +(i) ) = s[5]; \
*( (uint64_t*)(d6) +(i) ) = s[6]; \
*( (uint64_t*)(d7) +(i) ) = s[7]; \
} while(0)
static inline void dintrlv_8x64( void *d0, void *d1, void *d2, void *d3,
void *d4, void *d5, void *d6, void *d7, const void *src, int bit_len )
{
DLEAVE_8x64( 0 ); DLEAVE_8x64( 1 );
DLEAVE_8x64( 2 ); DLEAVE_8x64( 3 );
if ( bit_len <= 256 ) return;
DLEAVE_8x64( 4 ); DLEAVE_8x64( 5 );
DLEAVE_8x64( 6 ); DLEAVE_8x64( 7 );
if ( bit_len <= 512 ) return;
DLEAVE_8x64( 8 ); DLEAVE_8x64( 9 );
if ( bit_len <= 640 ) return;
DLEAVE_8x64( 10 ); DLEAVE_8x64( 11 );
DLEAVE_8x64( 12 ); DLEAVE_8x64( 13 );
DLEAVE_8x64( 14 ); DLEAVE_8x64( 15 );
}
#undef DLEAVE_8x64
*/
static inline void extr_lane_8x64( void *d, const void *s,
const int lane, const int bit_len )
{


@@ -27,13 +27,15 @@
// All of the utilities here assume all data is in registers except
// in rare cases where arguments are pointers.
//
// Some constants are generated using a memory overlay on the stack.
//
// Intrinsics automatically promote from REX to VEX when AVX is available
// but ASM needs to be done manually.
//
///////////////////////////////////////////////////////////////////////////
// Efficient and convenient moving bwtween GP & low bits of XMM.
// Efficient and convenient moving between GP & low bits of XMM.
// Use VEX when available to give access to xmm8-15 and zero extend for
// larger vectors.
@@ -81,6 +83,23 @@ static inline uint32_t mm128_mov128_32( const __m128i a )
return n;
}
// Equivalent of set1, broadcast integer to all elements.
#define m128_const_i128( i ) mm128_mov64_128( i )
#define m128_const1_64( i ) _mm_shuffle_epi32( mm128_mov64_128( i ), 0x44 )
#define m128_const1_32( i ) _mm_shuffle_epi32( mm128_mov32_128( i ), 0x00 )
#if defined(__SSE4_1__)
// Assign 64 bit integers to respective elements: {hi, lo}
#define m128_const_64( hi, lo ) \
_mm_insert_epi64( mm128_mov64_128( lo ), hi, 1 )
#else // No insert in SSE2
#define m128_const_64 _mm_set_epi64x
#endif
// Pseudo constants
#define m128_zero _mm_setzero_si128()
@@ -107,44 +126,65 @@ static inline __m128i mm128_neg1_fn()
}
#define m128_neg1 mm128_neg1_fn()
// const functions work best when arguments are immediate constants or
// are known to be in registers. If data needs to be loaded from memory or
// cache, use set.
// Equivalent of set1, broadcast 64 bit integer to all elements.
#define m128_const1_64( i ) _mm_shuffle_epi32( mm128_mov64_128( i ), 0x44 )
#define m128_const1_32( i ) _mm_shuffle_epi32( mm128_mov32_128( i ), 0x00 )
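// Illustrative usage (editor's sketch, not part of this diff): prefer the
// const1 forms for immediates, set1 for data coming from memory.
//
//    __m128i k = m128_const1_64( 0x0123456789abcdefULL ); // immediate
//    __m128i d = _mm_set1_epi64x( buf[i] );               // from memory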
#if defined(__SSE4_1__)
// Assign 64 bit integers to respective elements: {hi, lo}
#define m128_const_64( hi, lo ) \
_mm_insert_epi64( mm128_mov64_128( lo ), hi, 1 )
/////////////////////////////
//
// _mm_insert_ps( _mm128i v1, __m128i v2, imm8 c )
//
// Fast and powerful but very limited in its application.
// It requires SSE4.1 but only works with 128 bit vectors with 32 bit
// elements. There is no equivalent instruction for 256 bit or 512 bit vectors.
// There's no integer version. There's no 64 bit, 16 bit or byte element
// sizing. It's unique.
//
// It can:
// - zero 32 bit elements of a 128 bit vector.
// - extract any 32 bit element from one 128 bit vector and insert the
// data to any 32 bit element of another 128 bit vector, or the same vector.
// - do both simultaneously.
//
// It can be used as a more efficient replacement for _mm_insert_epi32
// or _mm_extract_epi32.
//
// Control byte definition:
// c[3:0] zero mask
// c[5:4] destination element selector
// c[7:6] source element selector
#else // No insert in SSE2
#define m128_const_64 _mm_set_epi64x
#endif
// Convert type and abbreviate name: e"x"tract "i"nsert "m"ask
#define mm128_xim_32( v1, v2, c ) \
_mm_castps_si128( _mm_insert_ps( _mm_castsi128_ps( v1 ), \
_mm_castsi128_ps( v2 ), c ) )
// Some examples of simple operations:
// Insert 32 bit integer into v at element c and return modified v.
static inline __m128i mm128_insert_32( const __m128i v, const uint32_t i,
const int c )
{ return mm128_xim_32( v, mm128_mov32_128( i ), c<<4 ); }
// Extract 32 bit element c from v and return as integer.
static inline uint32_t mm128_extract_32( const __m128i v, const int c )
{ return mm128_mov128_32( mm128_xim_32( v, v, c<<6 ) ); }
// Clear (zero) 32 bit elements based on bits set in 4 bit mask.
static inline __m128i mm128_mask_32( const __m128i v, const int m )
{ return mm128_xim_32( v, v, m ); }
#endif // SSE4_1
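// Usage sketch (editor's illustration, assuming SSE4.1 is available):
//
//    v = mm128_insert_32( v, n, 2 );        // overwrite element 2 with n
//    uint32_t x = mm128_extract_32( v, 3 ); // read element 3
//    v = mm128_mask_32( v, 0x5 );           // zero elements 0 and 2
//
// each maps to a single insertps (plus a mov for the extract).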
//
// Basic operations without equivalent SIMD intrinsic
// Bitwise not (~v)
#define mm128_not( v ) _mm_xor_si128( (v), m128_neg1 )
#define mm128_not( v ) _mm_xor_si128( v, m128_neg1 )
// Unary negation of elements (-v)
#define mm128_negate_64( v ) _mm_sub_epi64( m128_zero, v )
#define mm128_negate_32( v ) _mm_sub_epi32( m128_zero, v )
#define mm128_negate_16( v ) _mm_sub_epi16( m128_zero, v )
// Clear (zero) 32 bit elements based on bits set in 4 bit mask.
// Fast, avoids using vector mask, but only available for 128 bit vectors.
#define mm128_mask_32( a, mask ) \
_mm_castps_si128( _mm_insert_ps( _mm_castsi128_ps( a ), \
_mm_castsi128_ps( a ), mask ) )
// Add 4 values, fewer dependencies than sequential addition.
#define mm128_add4_64( a, b, c, d ) \
@@ -162,27 +202,6 @@ static inline __m128i mm128_neg1_fn()
#define mm128_xor4( a, b, c, d ) \
_mm_xor_si128( _mm_xor_si128( a, b ), _mm_xor_si128( c, d ) )
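// Editor's note: the pairwise grouping (a^b)^(c^d) has a dependency depth
// of 2, while a left to right a^b^c^d chain has depth 3, so an
// out-of-order core can issue the two inner XORs in the same cycle.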
// Horizontal vector testing
#if defined(__SSE4_1__)
#define mm128_allbits0( a ) _mm_testz_si128( a, a )
#define mm128_allbits1( a ) _mm_testc_si128( a, m128_neg1 )
// probably broken, as is the avx2 version
//#define mm128_allbitsne( a ) _mm_testnzc_si128( a, m128_neg1 )
#define mm128_anybits0( a ) mm128_allbits1( a )
#define mm128_anybits1( a ) mm128_allbits0( a )
#else // SSE2
// Bit-wise test of entire vector, useful to test results of cmp.
#define mm128_anybits0( a ) (uint128_t)(a)
#define mm128_anybits1( a ) (((uint128_t)(a))+1)
#define mm128_allbits0( a ) ( !mm128_anybits1(a) )
#define mm128_allbits1( a ) ( !mm128_anybits0(a) )
#endif // SSE4.1 else SSE2
//
// Vector pointer cast
@@ -204,11 +223,6 @@ static inline __m128i mm128_neg1_fn()
#define casto_m128i(p,o) (((__m128i*)(p))+(o))
// Memory functions
// Mostly for convenience, avoids calculating bytes.
// Assumes data is aligned and integral.
// n = number of __m128i, bytes/16
@@ -249,21 +263,22 @@ static inline void memcpy_128( __m128i *dst, const __m128i *src, const int n )
_mm_or_si128( _mm_slli_epi32( v, c ), _mm_srli_epi32( v, 32-(c) ) )
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
#if defined(__AVX512VL__)
//#if defined(__AVX512F__) && defined(__AVX512VL__)
#define mm128_ror_64 _mm_ror_epi64
#define mm128_rol_64 _mm_rol_epi64
#define mm128_ror_32 _mm_ror_epi32
#define mm128_rol_32 _mm_rol_epi32
#else
#else // SSE2
#define mm128_ror_64 mm128_ror_var_64
#define mm128_rol_64 mm128_rol_var_64
#define mm128_ror_32 mm128_ror_var_32
#define mm128_rol_32 mm128_rol_var_32
#endif // AVX512 else
#endif // AVX512 else SSE2
#define mm128_ror_16( v, c ) \
_mm_or_si128( _mm_srli_epi16( v, c ), _mm_slli_epi16( v, 16-(c) ) )
@@ -277,61 +292,19 @@ static inline void memcpy_128( __m128i *dst, const __m128i *src, const int n )
#define mm128_swap_64( v ) _mm_shuffle_epi32( v, 0x4e )
#define mm128_ror_1x32( v ) _mm_shuffle_epi32( v, 0x39 )
#define mm128_rol_1x32( v ) _mm_shuffle_epi32( v, 0x93 )
//#define mm128_swap_64( v ) _mm_alignr_epi8( v, v, 8 )
//#define mm128_ror_1x32( v ) _mm_alignr_epi8( v, v, 4 )
//#define mm128_rol_1x32( v ) _mm_alignr_epi8( v, v, 12 )
#define mm128_ror_1x16( v ) _mm_alignr_epi8( v, v, 2 )
#define mm128_rol_1x16( v ) _mm_alignr_epi8( v, v, 14 )
#define mm128_ror_1x8( v ) _mm_alignr_epi8( v, v, 1 )
#define mm128_rol_1x8( v ) _mm_alignr_epi8( v, v, 15 )
// Rotate by c bytes
#define mm128_ror_x8( v, c ) _mm_alignr_epi8( v, v, c )
#define mm128_rol_x8( v, c ) _mm_alignr_epi8( v, v, 16-(c) )
// Invert vector: {3,2,1,0} -> {0,1,2,3}
#define mm128_invert_32( v ) _mm_shuffle_epi32( v, 0x1b )
// Swap 32 bit elements in 64 bit lanes
#define mm128_swap64_32( v ) _mm_shuffle_epi32( v, 0xb1 )
#if defined(__SSSE3__)
#define mm128_invert_16( v ) \
_mm_shuffle_epi8( v, m128_const_64( 0x0100030205040706, \
0x09080b0a0d0c0f0e ) )
#define mm128_invert_8( v ) \
_mm_shuffle_epi8( v, m128_const_64( 0x0001020304050607, \
0x08090a0b0c0d0e0f ) )
#endif // SSSE3
//
// Rotate elements within lanes.
#define mm128_swap64_32( v ) _mm_shuffle_epi32( v, 0xb1 )
#define mm128_rol64_8( v, c ) \
_mm_or_si128( _mm_slli_epi64( v, ( (c)<<3 ) ), \
_mm_srli_epi64( v, ( 64 - ( (c)<<3 ) ) ) )
#define mm128_ror64_8( v, c ) \
_mm_or_si128( _mm_srli_epi64( v, ( (c)<<3 ) ), \
_mm_slli_epi64( v, ( 64 - ( (c)<<3 ) ) ) )
#define mm128_rol32_8( v, c ) \
_mm_or_si128( _mm_slli_epi32( v, ( (c)<<3 ) ), \
_mm_srli_epi32( v, ( 32 - ( (c)<<3 ) ) ) )
#define mm128_ror32_8( v, c ) \
_mm_or_si128( _mm_srli_epi32( v, ( (c)<<3 ) ), \
_mm_slli_epi32( v, ( 32 - ( (c)<<3 ) ) ) )
// Rotate right by c bytes, no SSE2 equivalent.
static inline __m128i mm128_ror_x8( const __m128i v, const int c )
{ return _mm_alignr_epi8( v, v, c ); }
//
// Endian byte swap.
#if defined(__SSSE3__)
#define mm128_bswap_64( v ) \
_mm_shuffle_epi8( v, m128_const_64( 0x08090a0b0c0d0e0f, \
0x0001020304050607 ) )
@@ -374,7 +347,6 @@ static inline void memcpy_128( __m128i *dst, const __m128i *src, const int n )
#else // SSE2
// Use inline function instead of macro due to multiple statements.
static inline __m128i mm128_bswap_64( __m128i v )
{
v = _mm_or_si128( _mm_slli_epi16( v, 8 ), _mm_srli_epi16( v, 8 ) );


@@ -15,33 +15,35 @@
// is available.
// Move integer to low element of vector, other elements are set to zero.
#define mm256_mov64_256( i ) _mm256_castsi128_si256( mm128_mov64_128( i ) )
#define mm256_mov32_256( i ) _mm256_castsi128_si256( mm128_mov32_128( i ) )
#define mm256_mov64_256( n ) _mm256_castsi128_si256( mm128_mov64_128( n ) )
#define mm256_mov32_256( n ) _mm256_castsi128_si256( mm128_mov32_128( n ) )
#define mm256_mov256_64( a ) mm128_mov128_64( _mm256_castsi256_si128( a ) )
#define mm256_mov256_32( a ) mm128_mov128_32( _mm256_castsi256_si128( a ) )
// Move low element of vector to integer.
#define mm256_mov256_64( v ) mm128_mov128_64( _mm256_castsi256_si128( v ) )
#define mm256_mov256_32( v ) mm128_mov128_32( _mm256_castsi256_si128( v ) )
// concatenate two 128 bit vectors into one 256 bit vector: { hi, lo }
#define mm256_concat_128( hi, lo ) \
_mm256_inserti128_si256( _mm256_castsi128_si256( lo ), hi, 1 )
// Equavalent of set, move 64 bit integer constants to respective 64 bit
// Equivalent of set, move 64 bit integer constants to respective 64 bit
// elements.
static inline __m256i m256_const_64( const uint64_t i3, const uint64_t i2,
const uint64_t i1, const uint64_t i0 )
{
__m128i hi, lo;
lo = mm128_mov64_128( i0 );
hi = mm128_mov64_128( i2 );
lo = _mm_insert_epi64( lo, i1, 1 );
hi = _mm_insert_epi64( hi, i3, 1 );
return mm256_concat_128( hi, lo );
union { __m256i m256i;
uint64_t u64[4]; } v;
v.u64[0] = i0; v.u64[1] = i1; v.u64[2] = i2; v.u64[3] = i3;
return v.m256i;
}
// Equivalent of set1, broadcast integer constant to all elements.
#define m256_const1_128( v ) _mm256_broadcastsi128_si256( v )
// Equivalent of set1.
// 128 bit vector argument
#define m256_const1_128( v ) \
_mm256_permute4x64_epi64( _mm256_castsi128_si256( v ), 0x44 )
// 64 bit integer argument zero extended to 128 bits.
#define m256_const1_i128( i ) m256_const1_128( mm128_mov64_128( i ) )
#define m256_const1_64( i ) _mm256_broadcastq_epi64( mm128_mov64_128( i ) )
#define m256_const1_32( i ) _mm256_broadcastd_epi32( mm128_mov32_128( i ) )
#define m256_const1_16( i ) _mm256_broadcastw_epi16( mm128_mov32_128( i ) )
@@ -50,119 +52,29 @@ static inline __m256i m256_const_64( const uint64_t i3, const uint64_t i2,
#define m256_const2_64( i1, i0 ) \
m256_const1_128( m128_const_64( i1, i0 ) )
#define m256_const2_32( i1, i0 ) \
m256_const1_64( ( (uint64_t)(i1) << 32 ) | ( (uint64_t)(i0) & 0xffffffff ) )
//
// All SIMD constant macros are actually functions containing executable
// code and therefore can't be used as compile time initializers.
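// Editor's sketch of the consequence (illustrative):
//
//    static __m256i bad = m256_zero;   // error: not a constant expression
//
//    static __m256i ok;                // declare, then assign at run time
//    void init(void) { ok = m256_zero; }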
#define m256_zero _mm256_setzero_si256()
#define m256_one_256 mm256_mov64_256( 1 )
#define m256_one_128 \
_mm256_permute4x64_epi64( _mm256_castsi128_si256( \
mm128_mov64_128( 1 ) ), 0x44 )
#define m256_one_64 _mm256_broadcastq_epi64( mm128_mov64_128( 1 ) )
#define m256_one_32 _mm256_broadcastd_epi32( mm128_mov64_128( 1 ) )
#define m256_one_16 _mm256_broadcastw_epi16( mm128_mov64_128( 1 ) )
#define m256_one_8 _mm256_broadcastb_epi8 ( mm128_mov64_128( 1 ) )
#define m256_zero _mm256_setzero_si256()
#define m256_one_256 mm256_mov64_256( 1 )
#define m256_one_128 m256_const1_i128( 1 )
#define m256_one_64 _mm256_broadcastq_epi64( mm128_mov64_128( 1 ) )
#define m256_one_32 _mm256_broadcastd_epi32( mm128_mov64_128( 1 ) )
#define m256_one_16 _mm256_broadcastw_epi16( mm128_mov64_128( 1 ) )
#define m256_one_8 _mm256_broadcastb_epi8 ( mm128_mov64_128( 1 ) )
static inline __m256i mm256_neg1_fn()
{
__m256i a;
asm( "vpcmpeqq %0, %0, %0\n\t" : "=x"(a) );
return a;
__m256i v;
asm( "vpcmpeqq %0, %0, %0\n\t" : "=x"(v) );
return v;
}
#define m256_neg1 mm256_neg1_fn()
//
// Vector size conversion.
//
// Allows operations on either or both halves of a 256 bit vector serially.
// Handy for parallel AES.
// Caveats when writing:
// _mm256_castsi256_si128 is free and without side effects.
// _mm256_castsi128_si256 is also free but leaves the high half
// undefined. That's ok if the hi half will be subsequently assigned.
// If assigning both, do lo first. If assigning only 1, use
// _mm256_inserti128_si256.
//
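// Editor's usage sketch of the widening rule above (illustrative):
//
//    __m256i w = _mm256_castsi128_si256( lo );  // high lane undefined
//    w = _mm256_inserti128_si256( w, hi, 1 );   // now fully defined
//
// assigning lo via the free cast first saves an instruction on the
// low lane.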
#define mm128_extr_lo128_256( a ) _mm256_castsi256_si128( a )
#define mm128_extr_hi128_256( a ) _mm256_extracti128_si256( a, 1 )
// Extract integers from 256 bit vector, inefficient, avoid if possible.
#define mm256_extr_4x64( a3, a2, a1, a0, src ) \
do { \
__m128i hi = _mm256_extracti128_si256( src, 1 ); \
a0 = mm128_mov128_64( _mm256_castsi256_si128( src) ); \
a1 = _mm_extract_epi64( _mm256_castsi256_si128( src ), 1 ); \
a2 = mm128_mov128_64( hi ); \
a3 = _mm_extract_epi64( hi, 1 ); \
} while(0)
#define mm256_extr_8x32( a7, a6, a5, a4, a3, a2, a1, a0, src ) \
do { \
uint64_t t = _mm_extract_epi64( _mm256_castsi256_si128( src ), 1 ); \
__m128i hi = _mm256_extracti128_si256( src, 1 ); \
a0 = mm256_mov256_32( src ); \
a1 = _mm_extract_epi32( _mm256_castsi256_si128( src ), 1 ); \
a2 = (uint32_t)( t ); \
a3 = (uint32_t)( t>>32 ); \
t = _mm_extract_epi64( hi, 1 ); \
a4 = mm128_mov128_32( hi ); \
a5 = _mm_extract_epi32( hi, 1 ); \
a6 = (uint32_t)( t ); \
a7 = (uint32_t)( t>>32 ); \
} while(0)
// Bytewise test of all 256 bits
#define mm256_all0_8( a ) \
( _mm256_movemask_epi8( a ) == 0 )
#define mm256_all1_8( a ) \
( _mm256_movemask_epi8( a ) == -1 )
#define mm256_anybits0( a ) \
( _mm256_movemask_epi8( a ) & 0xffffffff )
#define mm256_anybits1( a ) \
( ( _mm256_movemask_epi8( a ) & 0xffffffff ) != 0xffffffff )
// Bitwise test of all 256 bits
#define mm256_allbits0( a ) _mm256_testc_si256( a, m256_neg1 )
#define mm256_allbits1( a ) _mm256_testc_si256( m256_zero, a )
//#define mm256_anybits0( a ) !mm256_allbits1( a )
//#define mm256_anybits1( a ) !mm256_allbits0( a )
// Parallel AES, for when x is expected to be in a 256 bit register.
// Use same 128 bit key.
#if defined(__VAES__)
#define mm256_aesenc_2x128( x, k ) \
_mm256_aesenc_epi128( x, k )
#else
#define mm256_aesenc_2x128( x, k ) \
mm256_concat_128( _mm_aesenc_si128( mm128_extr_hi128_256( x ), k ), \
_mm_aesenc_si128( mm128_extr_lo128_256( x ), k ) )
#endif
#define mm256_paesenc_2x128( y, x, k ) do \
{ \
__m128i *X = (__m128i*)x; \
__m128i *Y = (__m128i*)y; \
Y[0] = _mm_aesenc_si128( X[0], k ); \
Y[1] = _mm_aesenc_si128( X[1], k ); \
} while(0);
// Consistent naming for similar operations.
#define mm128_extr_lo128_256( v ) _mm256_castsi256_si128( v )
#define mm128_extr_hi128_256( v ) _mm256_extracti128_si256( v, 1 )
//
// Pointer casting
@@ -201,13 +113,13 @@ static inline void memcpy_256( __m256i *dst, const __m256i *src, const int n )
//
// Basic operations without SIMD equivalent
// Bitwise not ( ~x )
#define mm256_not( x ) _mm256_xor_si256( (x), m256_neg1 ) \
// Bitwise not ( ~v )
#define mm256_not( v ) _mm256_xor_si256( v, m256_neg1 )
// Unary negation of each element ( -a )
#define mm256_negate_64( a ) _mm256_sub_epi64( m256_zero, a )
#define mm256_negate_32( a ) _mm256_sub_epi32( m256_zero, a )
#define mm256_negate_16( a ) _mm256_sub_epi16( m256_zero, a )
// Unary negation of each element ( -v )
#define mm256_negate_64( v ) _mm256_sub_epi64( m256_zero, v )
#define mm256_negate_32( v ) _mm256_sub_epi32( m256_zero, v )
#define mm256_negate_16( v ) _mm256_sub_epi16( m256_zero, v )
// Add 4 values, fewer dependencies than sequential addition.
@@ -256,7 +168,10 @@ static inline void memcpy_256( __m256i *dst, const __m256i *src, const int n )
_mm256_srli_epi32( v, 32-(c) ) )
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
// The spec says both F & VL are required, but just in case AMD
// decides to implement ROL/R without AVX512F.
#if defined(__AVX512VL__)
//#if defined(__AVX512F__) && defined(__AVX512VL__)
// AVX512, control must be 8 bit immediate.
@@ -265,17 +180,14 @@ static inline void memcpy_256( __m256i *dst, const __m256i *src, const int n )
#define mm256_ror_32 _mm256_ror_epi32
#define mm256_rol_32 _mm256_rol_epi32
#else
// No AVX512, use fallback.
#else // AVX2
#define mm256_ror_64 mm256_ror_var_64
#define mm256_rol_64 mm256_rol_var_64
#define mm256_ror_32 mm256_ror_var_32
#define mm256_rol_32 mm256_rol_var_32
#endif // AVX512 else
#endif // AVX512 else AVX2
#define mm256_ror_16( v, c ) \
_mm256_or_si256( _mm256_srli_epi16( v, c ), \
@@ -285,67 +197,10 @@ static inline void memcpy_256( __m256i *dst, const __m256i *src, const int n )
_mm256_or_si256( _mm256_slli_epi16( v, c ), \
_mm256_srli_epi16( v, 16-(c) ) )
// Rotate bits in each element of v by the amount in corresponding element of
// index vector c
#define mm256_rorv_64( v, c ) \
_mm256_or_si256( \
_mm256_srlv_epi64( v, c ), \
_mm256_sllv_epi64( v, _mm256_sub_epi64( \
_mm256_set1_epi64x( 64 ), c ) ) )
#define mm256_rolv_64( v, c ) \
_mm256_or_si256( \
_mm256_sllv_epi64( v, c ), \
_mm256_srlv_epi64( v, _mm256_sub_epi64( \
_mm256_set1_epi64x( 64 ), c ) ) )
#define mm256_rorv_32( v, c ) \
_mm256_or_si256( \
_mm256_srlv_epi32( v, c ), \
_mm256_sllv_epi32( v, _mm256_sub_epi32( \
_mm256_set1_epi32( 32 ), c ) ) )
#define mm256_rolv_32( v, c ) \
_mm256_or_si256( \
_mm256_sllv_epi32( v, c ), \
_mm256_srlv_epi32( v, _mm256_sub_epi32( \
_mm256_set1_epi32( 32 ), c ) ) )
// AVX512 can do 16 bit elements.
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
#define mm256_rorv_16( v, c ) \
_mm256_or_si256( \
_mm256_srlv_epi16( v, _mm256_set1_epi16( c ) ), \
_mm256_sllv_epi16( v, _mm256_set1_epi16( 16-(c) ) ) )
#define mm256_rolv_16( v, c ) \
_mm256_or_si256( \
_mm256_sllv_epi16( v, _mm256_set1_epi16( c ) ), \
_mm256_srlv_epi16( v, _mm256_set1_epi16( 16-(c) ) ) )
#endif // AVX512
//
// Rotate elements accross all lanes.
//
// AVX2 has no full vector permute for elements less than 32 bits.
// AVX512 has finer granularity full vector permutes.
// AVX512 has full vector alignr which might be faster, especially for 32 bit
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
#define mm256_swap_128( v ) _mm256_alignr_epi64( v, v, 2 )
#define mm256_ror_1x64( v ) _mm256_alignr_epi64( v, v, 1 )
#define mm256_rol_1x64( v ) _mm256_alignr_epi64( v, v, 3 )
#define mm256_ror_1x32( v ) _mm256_alignr_epi32( v, v, 1 )
#define mm256_rol_1x32( v ) _mm256_alignr_epi32( v, v, 7 )
#define mm256_ror_3x32( v ) _mm256_alignr_epi32( v, v, 3 )
#define mm256_rol_3x32( v ) _mm256_alignr_epi32( v, v, 5 )
#else // AVX2
// Swap 128 bit elements in 256 bit vector.
#define mm256_swap_128( v ) _mm256_permute4x64_epi64( v, 0x4e )
@@ -353,6 +208,16 @@ static inline void memcpy_256( __m256i *dst, const __m256i *src, const int n )
#define mm256_ror_1x64( v ) _mm256_permute4x64_epi64( v, 0x39 )
#define mm256_rol_1x64( v ) _mm256_permute4x64_epi64( v, 0x93 )
#if defined(__AVX512F__) && defined(__AVX512VL__)
static inline __m256i mm256_ror_1x32( const __m256i v )
{ return _mm256_alignr_epi32( v, v, 1 ); }
static inline __m256i mm256_rol_1x32( const __m256i v )
{ return _mm256_alignr_epi32( v, v, 7 ); }
#else // AVX2
// Rotate 256 bit vector by one 32 bit element.
#define mm256_ror_1x32( v ) \
_mm256_permutevar8x32_epi32( v, \
@@ -364,144 +229,20 @@ static inline void memcpy_256( __m256i *dst, const __m256i *src, const int n )
m256_const_64( 0x0000000600000005, 0x0000000400000003, \
0x0000000200000001, 0x0000000000000007 ) )
// Rotate 256 bit vector by three 32 bit elements (96 bits).
#define mm256_ror_3x32( v ) \
_mm256_permutevar8x32_epi32( v, \
m256_const_64( 0x0000000200000001, 0x0000000000000007, \
0x0000000600000005, 0x0000000400000003 ) )
#define mm256_rol_3x32( v ) \
_mm256_permutevar8x32_epi32( v, \
m256_const_64( 0x0000000400000003, 0x0000000200000001, \
0x0000000000000007, 0x0000000600000005 ) )
#endif // AVX512 else AVX2
// AVX512 can do 16 & 8 bit elements.
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
// Rotate 256 bit vector by one 16 bit element.
#define mm256_ror_1x16( v ) \
_mm256_permutexvar_epi16( m256_const_64( \
0x0000000f000e000d, 0x000c000b000a0009, \
0x0008000700060005, 0x0004000300020001 ), v )
#define mm256_rol_1x16( v ) \
_mm256_permutexvar_epi16( m256_const_64( \
0x000e000d000c000b, 0x000a000900080007, \
0x0006000500040003, 0x000200010000000f ), v )
#if defined (__AVX512VBMI__)
// Rotate 256 bit vector by one byte.
#define mm256_ror_1x8( v ) _mm256_permutexvar_epi8( m256_const_64( \
0x001f1e1d1c1b1a19, 0x1817161514131211, \
0x100f0e0d0c0b0a09, 0x0807060504030201 ), v )
#define mm256_rol_1x8( v ) _mm256_permutexvar_epi8( m256_const_64( \
0x1e1d1c1b1a191817, 0x161514131211100f, \
0x0e0d0c0b0a090807, 0x060504030201001f ), v )
#endif // VBMI
#endif // AVX512
// Invert vector: {3,2,1,0} -> {0,1,2,3}
#define mm256_invert_64( v ) _mm256_permute4x64_epi64( v, 0x1b )
#define mm256_invert_32( v ) _mm256_permutevar8x32_epi32( v, \
m256_const_64( 0x0000000000000001, 0x0000000200000003, \
0x0000000400000005, 0x0000000600000007 ) )
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
// Invert vector: {7,6,5,4,3,2,1,0} -> {0,1,2,3,4,5,6,7}
#define mm256_invert_16( v ) \
_mm256_permutexvar_epi16( m256_const_64( \
0x0000000100020003, 0x0004000500060007, \
0x00080009000a000b, 0x000c000d000e000f ), v )
#if defined(__AVX512VBMI__)
#define mm256_invert_8( v ) \
_mm256_permutexvar_epi8( m256_const_64( \
0x0001020304050607, 0x08090a0b0c0d0e0f, \
0x1011121314151617, 0x18191a1b1c1d1e1f ), v )
#endif // VBMI
#endif // AVX512
//
// Rotate elements within each 128 bit lane of 256 bit vector.
#define mm256_swap128_64( v ) _mm256_shuffle_epi32( v, 0x4e )
#define mm256_swap128_64( v ) _mm256_shuffle_epi32( v, 0x4e )
#define mm256_ror128_32( v ) _mm256_shuffle_epi32( v, 0x39 )
#define mm256_rol128_32( v ) _mm256_shuffle_epi32( v, 0x93 )
#define mm256_ror128_32( v ) _mm256_shuffle_epi32( v, 0x39 )
#define mm256_rol128_32( v ) _mm256_shuffle_epi32( v, 0x93 )
#define mm256_ror128_x8( v, c ) _mm256_alignr_epi8( v, v, c )
/*
// Rotate each 128 bit lane by c elements.
#define mm256_ror128_8( v, c ) \
_mm256_or_si256( _mm256_bsrli_epi128( v, c ), \
_mm256_bslli_epi128( v, 16-(c) ) )
#define mm256_rol128_8( v, c ) \
_mm256_or_si256( _mm256_bslli_epi128( v, c ), \
_mm256_bsrli_epi128( v, 16-(c) ) )
*/
// Rotate elements in each 64 bit lane
#define mm256_swap64_32( v ) _mm256_shuffle_epi32( v, 0xb1 )
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
#define mm256_rol64_8( v, c ) _mm256_rol_epi64( v, ((c)<<3) )
#define mm256_ror64_8( v, c ) _mm256_ror_epi64( v, ((c)<<3) )
#else
#define mm256_rol64_8( v, c ) \
_mm256_or_si256( _mm256_slli_epi64( v, ( (c)<<3 ) ), \
_mm256_srli_epi64( v, ( 64 - ( (c)<<3 ) ) ) )
#define mm256_ror64_8( v, c ) \
_mm256_or_si256( _mm256_srli_epi64( v, ( (c)<<3 ) ), \
_mm256_slli_epi64( v, ( 64 - ( (c)<<3 ) ) ) )
#endif
// Rotate elements in each 32 bit lane
#if defined(__AVX512F__) && defined(__AVX512VL__) && defined(__AVX512DQ__) && defined(__AVX512BW__)
#define mm256_swap32_16( v ) _mm256_rol_epi32( v, 16 )
#define mm256_rol32_8( v ) _mm256_rol_epi32( v, 8 )
#define mm256_ror32_8( v ) _mm256_ror_epi32( v, 8 )
#else
#define mm256_swap32_16( v ) \
_mm256_or_si256( _mm256_slli_epi32( v, 16 ), \
_mm256_srli_epi32( v, 16 ) )
#define mm256_rol32_8( v ) \
_mm256_or_si256( _mm256_slli_epi32( v, 8 ), \
_mm256_srli_epi32( v, 24 ) )
#define mm256_ror32_8( v ) \
_mm256_or_si256( _mm256_srli_epi32( v, 8 ), \
_mm256_slli_epi32( v, 24 ) )
#endif
static inline __m256i mm256_ror128_x8( const __m256i v, const int c )
{ return _mm256_alignr_epi8( v, v, c ); }
// Swap 32 bit elements in each 64 bit lane.
#define mm256_swap64_32( v ) _mm256_shuffle_epi32( v, 0xb1 )
//
// Swap bytes in vector elements, endian bswap.


@@ -26,9 +26,6 @@
// _mm512_permutex_epi64 only shuffles within 256 bit lanes. Permute
// usually shuffles accross all lanes.
//
// Some instructions like cmp and blend use a mask register now instead
// of a mask vector.
//
// permutexvar has args reversed, index is first arg. Previously all
// permutes and shuffles have the index last.
//
@@ -85,52 +82,43 @@
#define mm512_mov256_64( a ) mm128_mov128_64( _mm256_castsi512_si128( a ) )
#define mm512_mov256_32( a ) mm128_mov128_32( _mm256_castsi512_si128( a ) )
// Inserting and extracting integers is a multistage operation.
// Insert the integer into an __m128i, then insert the __m128i into an
// __m256i, and finally insert the __m256i into an __m512i. Reverse the
// order for extract.
// Do not use _mm512_insert_epi64 or _mm256_insert_epi64 to perform
// multiple inserts.
// Avoid small integers for multiple inserts.
// Shortcuts:
// Use castsi to reference the low bits of a vector or sub-vector. (free)
// Use mov to insert integer into low bits of vector or sub-vector. (cheap)
// Use _mm_insert only to reference the high bits of __m128i. (expensive)
// Sequence instructions to minimize data dependencies.
// Use const or const1 only when integer is either immediate or known to be in
// a GP register. Use set/set1 when data needs to be loaded from memory or
// cache.
// A simple 128 bit permute, using function instead of macro avoids
// problems if the v arg passed as an expression.
static inline __m512i mm512_perm_128( const __m512i v, const int c )
{ return _mm512_shuffle_i64x2( v, v, c ); }
// Concatenate two 256 bit vectors into one 512 bit vector {hi, lo}
#define mm512_concat_256( hi, lo ) \
_mm512_inserti64x4( _mm512_castsi256_si512( lo ), hi, 1 )
// Equivalent of set, assign 64 bit integers to respective 64 bit elements.
// Use stack memory overlay
static inline __m512i m512_const_64( const uint64_t i7, const uint64_t i6,
const uint64_t i5, const uint64_t i4,
const uint64_t i3, const uint64_t i2,
const uint64_t i1, const uint64_t i0 )
{
__m256i hi, lo;
__m128i hi1, lo1;
lo = mm256_mov64_256( i0 );
lo1 = mm128_mov64_128( i2 );
hi = mm256_mov64_256( i4 );
hi1 = mm128_mov64_128( i6 );
lo = _mm256_castsi128_si256(
_mm_insert_epi64( _mm256_castsi256_si128( lo ), i1, 1 ) );
lo1 = _mm_insert_epi64( lo1, i3, 1 );
hi = _mm256_castsi128_si256(
_mm_insert_epi64( _mm256_castsi256_si128( hi ), i5, 1 ) );
hi1 = _mm_insert_epi64( hi1, i7, 1 );
lo = _mm256_inserti128_si256( lo, lo1, 1 );
hi = _mm256_inserti128_si256( hi, hi1, 1 );
return mm512_concat_256( hi, lo );
union { __m512i m512i;
uint64_t u64[8]; } v;
v.u64[0] = i0; v.u64[1] = i1;
v.u64[2] = i2; v.u64[3] = i3;
v.u64[4] = i4; v.u64[5] = i5;
v.u64[6] = i6; v.u64[7] = i7;
return v.m512i;
}
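// Usage sketch (illustrative only): build the BLAKE2b IV as one 512 bit
// vector. Arguments are ordered high to low, i7 down to i0.
//
//   __m512i iv = m512_const_64( 0x5be0cd19137e2179, 0x1f83d9abfb41bd6b,
//                               0x9b05688c2b3e6c1f, 0x510e527fade682d1,
//                               0xa54ff53a5f1d36f1, 0x3c6ef372fe94f82b,
//                               0xbb67ae8584caa73b, 0x6a09e667f3bcc908 );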
// Equivalent of set1, broadcast 64 bit constant to all 64 bit elements.
#define m512_const1_256( v ) _mm512_broadcast_i64x4( v )
#define m512_const1_128( v ) _mm512_broadcast_i64x2( v )
// Equivalent of set1, broadcast lo element all elements.
static inline __m512i m512_const1_256( const __m256i v )
{ return _mm512_inserti64x4( _mm512_castsi256_si512( v ), v, 1 ); }
#define m512_const1_128( v ) \
mm512_perm_128( _mm512_castsi128_si512( v ), 0 )
// Integer input argument up to 64 bits
#define m512_const1_i128( i ) \
mm512_perm_128( _mm512_castsi128_si512( mm128_mov64_128( i ) ), 0 )
//#define m512_const1_256( v ) _mm512_broadcast_i64x4( v )
//#define m512_const1_128( v ) _mm512_broadcast_i64x2( v )
#define m512_const1_64( i ) _mm512_broadcastq_epi64( mm128_mov64_128( i ) )
#define m512_const1_32( i ) _mm512_broadcastd_epi32( mm128_mov32_128( i ) )
#define m512_const1_16( i ) _mm512_broadcastw_epi16( mm128_mov32_128( i ) )
@@ -142,23 +130,17 @@ static inline __m512i m512_const_64( const uint64_t i7, const uint64_t i6,
#define m512_const2_64( i1, i0 ) \
m512_const1_128( m128_const_64( i1, i0 ) )
#define m512_const2_32( i1, i0 ) \
m512_const1_64( ( (uint64_t)(i1) << 32 ) | ( (uint64_t)(i0) & 0xffffffff ) )
// { m128_1, m128_1, m128_0, m128_0 }
#define m512_const_2x128( v1, v0 ) \
   _mm512_mask_blend_epi64( 0x0f, m512_const1_128( v1 ), m512_const1_128( v0 ) )
static inline __m512i m512_const4_64( const uint64_t i3, const uint64_t i2,
const uint64_t i1, const uint64_t i0 )
{
__m256i lo = mm256_mov64_256( i0 );
__m128i hi = mm128_mov64_128( i2 );
lo = _mm256_castsi128_si256(
_mm_insert_epi64( _mm256_castsi256_si128(
lo ), i1, 1 ) );
hi = _mm_insert_epi64( hi, i3, 1 );
return _mm512_broadcast_i64x4( _mm256_inserti128_si256( lo, hi, 1 ) );
union { __m512i m512i;
uint64_t u64[8]; } v;
v.u64[0] = v.u64[4] = i0;
v.u64[1] = v.u64[5] = i1;
v.u64[2] = v.u64[6] = i2;
v.u64[3] = v.u64[7] = i3;
return v.m512i;
}
//
@@ -170,14 +152,15 @@ static inline __m512i m512_const4_64( const uint64_t i3, const uint64_t i2,
#define m512_zero _mm512_setzero_si512()
#define m512_one_512 mm512_mov64_512( 1 )
#define m512_one_256 _mm512_broadcast_i64x4 ( mm256_mov64_256( 1 ) )
#define m512_one_128 _mm512_broadcast_i64x2 ( mm128_mov64_128( 1 ) )
#define m512_one_64 _mm512_broadcastq_epi64( mm128_mov64_128( 1 ) )
#define m512_one_32 _mm512_broadcastd_epi32( mm128_mov64_128( 1 ) )
#define m512_one_16 _mm512_broadcastw_epi16( mm128_mov64_128( 1 ) )
#define m512_one_8 _mm512_broadcastb_epi8 ( mm128_mov64_128( 1 ) )
#define m512_one_256 _mm512_inserti64x4( m512_one_512, m256_one_256, 1 )
#define m512_one_128 m512_const1_i128( 1 )
#define m512_one_64 m512_const1_64( 1 )
#define m512_one_32 m512_const1_32( 1 )
#define m512_one_16 m512_const1_16( 1 )
#define m512_one_8 m512_const1_8( 1 )
#define m512_neg1 m512_const1_64( 0xffffffffffffffff )
//#define m512_neg1 m512_const1_64( 0xffffffffffffffff )
#define m512_neg1 _mm512_movm_epi64( 0xff )
//
// Basic operations without SIMD equivalent
@@ -242,15 +225,6 @@ static inline void memcpy_512( __m512i *dst, const __m512i *src, const int n )
_mm512_xor_si512( _mm512_xor_si512( a, b ), _mm512_xor_si512( c, d ) )
// Horizontal vector testing
// Returns bit __mmask8
#define mm512_allbits0( a ) _mm512_cmpeq_epi64_mask( a, m512_zero )
#define mm512_allbits1( a ) _mm512_cmpeq_epi64_mask( a, m512_neg1 )
#define mm512_anybits0( a ) _mm512_cmpneq_epi64_mask( a, m512_neg1 )
#define mm512_anybits1( a ) _mm512_cmpneq_epi64_mask( a, m512_zero )
//
// Bit rotations.
@@ -262,37 +236,47 @@ static inline void memcpy_512( __m512i *dst, const __m512i *src, const int n )
// _mm512_rolv_epi64, _mm512_rorv_epi64, _mm512_rolv_epi32, _mm512_rorv_epi32
//
// For convenience and consistency with AVX2
#define mm512_ror_64 _mm512_ror_epi64
#define mm512_rol_64 _mm512_rol_epi64
#define mm512_ror_32 _mm512_ror_epi32
#define mm512_rol_32 _mm512_rol_epi32
#define mm512_ror_var_64( v, c ) \
_mm512_or_si512( _mm512_srli_epi64( v, c ), \
_mm512_slli_epi64( v, 64-(c) ) )
static inline __m512i mm512_ror_var_64( const __m512i v, const int c )
{
return _mm512_or_si512( _mm512_srli_epi64( v, c ),
_mm512_slli_epi64( v, 64-c ) );
}
#define mm512_rol_var_64( v, c ) \
_mm512_or_si512( _mm512_slli_epi64( v, c ), \
_mm512_srli_epi64( v, 64-(c) ) )
static inline __m512i mm512_rol_var_64( const __m512i v, const int c )
{
return _mm512_or_si512( _mm512_slli_epi64( v, c ),
_mm512_srli_epi64( v, 64-c ) );
}
#define mm512_ror_var_32( v, c ) \
_mm512_or_si512( _mm512_srli_epi32( v, c ), \
_mm512_slli_epi32( v, 32-(c) ) )
static inline __m512i mm512_ror_var_32( const __m512i v, const int c )
{
return _mm512_or_si512( _mm512_srli_epi32( v, c ),
_mm512_slli_epi32( v, 32-c ) );
}
#define mm512_rol_var_32( v, c ) \
_mm512_or_si512( _mm512_slli_epi32( v, c ), \
_mm512_srli_epi32( v, 32-(c) ) )
// Fixed bit rotates for 16 bit elements:
#define mm512_ror_16( v, c ) \
   _mm512_or_si512( _mm512_srli_epi16( v, c ), \
                    _mm512_slli_epi16( v, 16-(c) ) )
#define mm512_rol_16( v, c ) \
   _mm512_or_si512( _mm512_slli_epi16( v, c ), \
                    _mm512_srli_epi16( v, 16-(c) ) )
static inline __m512i mm512_rol_var_32( const __m512i v, const int c )
{
return _mm512_or_si512( _mm512_slli_epi32( v, c ),
_mm512_srli_epi32( v, 32-c ) );
}
static inline __m512i mm512_ror_16( const __m512i v, const int c )
{
return _mm512_or_si512( _mm512_srli_epi16( v, c ),
_mm512_slli_epi16( v, 16-c ) );
}
static inline __m512i mm512_rol_16( const __m512i v, const int c )
{
return _mm512_or_si512( _mm512_slli_epi16( v, c ),
_mm512_srli_epi16( v, 16-c ) );
}
// Rotations using a vector control index are very slow due to overhead
// to generate the index vector. Repeated rotations using the same index
@@ -363,25 +347,32 @@ static inline void memcpy_512( __m512i *dst, const __m512i *src, const int n )
//
// Rotate elements in 512 bit vector.
static inline __m512i mm512_swap_256( const __m512i v )
{ return _mm512_alignr_epi64( v, v, 4 ); }
#define mm512_swap_256( v ) _mm512_alignr_epi64( v, v, 4 )
static inline __m512i mm512_ror_1x128( const __m512i v )
{ return _mm512_alignr_epi64( v, v, 2 ); }
// 1x64 notation used to distinguish from bit rotation.
#define mm512_ror_1x128( v ) _mm512_alignr_epi64( v, v, 2 )
#define mm512_rol_1x128( v ) _mm512_alignr_epi64( v, v, 6 )
static inline __m512i mm512_rol_1x128( const __m512i v )
{ return _mm512_alignr_epi64( v, v, 6 ); }
#define mm512_ror_1x64( v ) _mm512_alignr_epi64( v, v, 1 )
#define mm512_rol_1x64( v ) _mm512_alignr_epi64( v, v, 7 )
static inline __m512i mm512_ror_1x64( const __m512i v )
{ return _mm512_alignr_epi64( v, v, 1 ); }
#define mm512_ror_1x32( v ) _mm512_alignr_epi32( v, v, 1 )
#define mm512_rol_1x32( v ) _mm512_alignr_epi32( v, v, 15 )
static inline __m512i mm512_rol_1x64( const __m512i v )
{ return _mm512_alignr_epi64( v, v, 7 ); }
// Generic for odd rotations
#define mm512_ror_x64( v, n ) _mm512_alignr_epi64( v, v, n )
#define mm512_rol_x64( v, n ) _mm512_alignr_epi64( v, v, 8-(n) )
static inline __m512i mm512_ror_1x32( const __m512i v )
{ return _mm512_alignr_epi32( v, v, 1 ); }
#define mm512_ror_x32( v, n ) _mm512_alignr_epi32( v, v, n )
#define mm512_rol_x32( v, n ) _mm512_alignr_epi32( v, v, 16-(n) )
static inline __m512i mm512_rol_1x32( const __m512i v )
{ return _mm512_alignr_epi32( v, v, 15 ); }
static inline __m512i mm512_ror_x64( const __m512i v, const int n )
{ return _mm512_alignr_epi64( v, v, n ); }
static inline __m512i mm512_ror_x32( const __m512i v, const int n )
{ return _mm512_alignr_epi32( v, v, n ); }
#define mm512_ror_1x16( v ) \
_mm512_permutexvar_epi16( m512_const_64( \
@@ -411,38 +402,6 @@ static inline void memcpy_512( __m512i *dst, const __m512i *src, const int n )
0x1E1D1C1B1A191817, 0x161514131211100F, \
0x0E0D0C0B0A090807, 0x060504030201003F ) )
// Invert vector: {3,2,1,0} -> {0,1,2,3}
#define mm512_invert_256( v ) \
_mm512_permutexvar_epi64( v, m512_const_64( 3,2,1,0,7,6,5,4 ) )
#define mm512_invert_128( v ) \
_mm512_permutexvar_epi64( v, m512_const_64( 1,0,3,2,5,4,7,6 ) )
#define mm512_invert_64( v ) \
_mm512_permutexvar_epi64( v, m512_const_64( 0,1,2,3,4,5,6,7 ) )
#define mm512_invert_32( v ) \
_mm512_permutexvar_epi32( m512_const_64( \
0x0000000000000001,0x0000000200000003, \
0x0000000400000005,0x0000000600000007, \
0x0000000800000009,0x0000000a0000000b, \
0x0000000c0000000d,0x0000000e0000000f ), v )
#define mm512_invert_16( v ) \
_mm512_permutexvar_epi16( m512_const_64( \
0x0000000100020003, 0x0004000500060007, \
0x00080009000A000B, 0x000C000D000E000F, \
0x0010001100120013, 0x0014001500160017, \
0x00180019001A001B, 0x001C001D001E001F ), v )
#define mm512_invert_8( v ) \
_mm512_shuffle_epi8( v, m512_const_64( \
0x0001020304050607, 0x08090A0B0C0D0E0F, \
0x1011121314151617, 0x18191A1B1C1D1E1F, \
0x2021222324252627, 0x28292A2B2C2D2E2F, \
0x3031323334353637, 0x38393A3B3C3D3E3F ) )
//
// Rotate elements within 256 bit lanes of 512 bit vector.
@@ -450,11 +409,10 @@ static inline void memcpy_512( __m512i *dst, const __m512i *src, const int n )
#define mm512_swap256_128( v ) _mm512_permutex_epi64( v, 0x4e )
// Rotate 256 bit lanes by one 64 bit element
#define mm512_ror256_64( v )   _mm512_permutex_epi64( v, 0x39 )
#define mm512_rol256_64( v )   _mm512_permutex_epi64( v, 0x93 )
// Rotate 256 bit lanes by one 32 bit element
#define mm512_ror256_32( v ) \
_mm512_permutexvar_epi32( m512_const_64( \
0x000000080000000f, 0x0000000e0000000d, \
@@ -488,68 +446,41 @@ static inline void memcpy_512( __m512i *dst, const __m512i *src, const int n )
0x203f3e3d3c3b3a39, 0x3837363534333231, \
0x302f2e2d2c2b2a29, 0x2827262524232221, \
0x001f1e1d1c1b1a19, 0x1817161514131211, \
0x100f0e0d0c0b0a09, 0x0807060504030201 ), v )
0x100f0e0d0c0b0a09, 0x0807060504030201 ) )
#define mm512_rol256_8( v ) \
_mm512_shuffle_epi8( v, m512_const_64( \
0x3e3d3c3b3a393837, 0x363534333231302f, \
0x2e2d2c2b2a292827, 0x262524232221203f, \
0x1e1d1c1b1a191817, 0x161514131211100f, \
0x0e0d0c0b0a090807, 0x060504030201001f ), v )
0x0e0d0c0b0a090807, 0x060504030201001f ) )
//
// Rotate elements within 128 bit lanes of 512 bit vector.
// Swap 64 bits in each 128 bit lane
#define mm512_swap128_64( v )  _mm512_shuffle_epi32( v, 0x4e )
// Rotate 128 bit lanes by one 32 bit element
#define mm512_ror128_32( v )   _mm512_shuffle_epi32( v, 0x39 )
#define mm512_rol128_32( v )   _mm512_shuffle_epi32( v, 0x93 )
#define mm512_ror128_x8( v, c ) _mm512_alignr_epi8( v, v, c )
// Rotate right 128 bit lanes by c bytes
static inline __m512i mm512_ror128_x8( const __m512i v, const int c )
{ return _mm512_alignr_epi8( v, v, c ); }
/*
// Rotate 128 bit lanes by c bytes, faster than building that monstrous
// constant above.
#define mm512_ror128_8( v, c ) \
_mm512_or_si512( _mm512_bsrli_epi128( v, c ), \
_mm512_bslli_epi128( v, 16-(c) ) )
#define mm512_rol128_8( v, c ) \
_mm512_or_si512( _mm512_bslli_epi128( v, c ), \
_mm512_bsrli_epi128( v, 16-(c) ) )
*/
//
// Rotate elements within 64 bit lanes.
#define mm512_rol64_x8( v, c ) _mm512_rol_epi64( v, ((c)<<3) )
#define mm512_ror64_x8( v, c ) _mm512_ror_epi64( v, ((c)<<3) )
// Swap 32 bit elements in each 64 bit lane
#define mm512_swap64_32( v ) _mm512_shuffle_epi32( v, 0xb1 )
// Rotate each 64 bit lane by one 16 bit element.
#define mm512_ror64_16( v ) _mm512_ror_epi64( v, 16 )
#define mm512_rol64_16( v ) _mm512_rol_epi64( v, 16 )
#define mm512_ror64_8( v ) _mm512_ror_epi64( v, 8 )
#define mm512_rol64_8( v ) _mm512_rol_epi64( v, 8 )
//
// Rotate elements within 32 bit lanes.
#define mm512_rol32_x8( v, c )   _mm512_rol_epi32( v, ((c)<<3) )
#define mm512_ror32_x8( v, c )   _mm512_ror_epi32( v, ((c)<<3) )
// Swap 32 bits in each 64 bit lane.
#define mm512_swap64_32( v ) _mm512_shuffle_epi32( v, 0xb1 )
//
// Rotate elements from 2 512 bit vectors in place, source arguments
// are overwritten.
#define mm512_swap1024_512( v1, v2 ) \
   v1 = _mm512_xor_si512( v1, v2 ); \
   v2 = _mm512_xor_si512( v1, v2 ); \
   v1 = _mm512_xor_si512( v1, v2 );
#define mm512_ror1024_256( v1, v2 ) \
do { \


@@ -1,18 +1,18 @@
#if !defined(SIMD_64_H__)
#define SIMD_64_H__ 1
#if defined(__MMX__)
#if defined(__MMX__) && defined(__SSE__)
////////////////////////////////////////////////////////////////
//
// 64 bit MMX vectors.
//
// There are rumours MMX will be removed. Although casting with int64
// works there is likely some overhead to move the data to an MMX register
// and back.
// This code is not used anywhere and likely never will be. Its intent was
// to support 2 way parallel hashing using SSE2 for 64 bit, and MMX for 32
// bit hash functions, but was never implemented.
// Pseudo constants
/*
#define m64_zero _mm_setzero_si64()
#define m64_one_64 _mm_set_pi32( 0UL, 1UL )
@@ -30,79 +30,67 @@
#define casti_m64(p,i) (((__m64*)(p))[(i)])
// cast all arguments as they're likely to be uint64_t
// Bitwise not: ~(a)
//#define mm64_not( a ) _mm_xor_si64( (__m64)a, m64_neg1 )
#define mm64_not( a ) ( (__m64)( ~( (uint64_t)(a) ) ) )
// Unary negate elements
#define mm64_negate_32( v ) _mm_sub_pi32( m64_zero, (__m64)v )
#define mm64_negate_16( v ) _mm_sub_pi16( m64_zero, (__m64)v )
#define mm64_negate_8( v ) _mm_sub_pi8( m64_zero, (__m64)v )
#define mm64_negate_32( v ) _mm_sub_pi32( m64_zero, v )
#define mm64_negate_16( v ) _mm_sub_pi16( m64_zero, v )
#define mm64_negate_8( v ) _mm_sub_pi8( m64_zero, v )
// Rotate bits in packed elements of 64 bit vector
#define mm64_rol_64( a, n ) \
_mm_or_si64( _mm_slli_si64( (__m64)(a), n ), \
_mm_srli_si64( (__m64)(a), 64-(n) ) )
_mm_or_si64( _mm_slli_si64( a, n ), \
_mm_srli_si64( a, 64-(n) ) )
#define mm64_ror_64( a, n ) \
_mm_or_si64( _mm_srli_si64( (__m64)(a), n ), \
_mm_slli_si64( (__m64)(a), 64-(n) ) )
_mm_or_si64( _mm_srli_si64( a, n ), \
_mm_slli_si64( a, 64-(n) ) )
#define mm64_rol_32( a, n ) \
_mm_or_si64( _mm_slli_pi32( (__m64)(a), n ), \
_mm_srli_pi32( (__m64)(a), 32-(n) ) )
_mm_or_si64( _mm_slli_pi32( a, n ), \
_mm_srli_pi32( a, 32-(n) ) )
#define mm64_ror_32( a, n ) \
_mm_or_si64( _mm_srli_pi32( (__m64)(a), n ), \
_mm_slli_pi32( (__m64)(a), 32-(n) ) )
_mm_or_si64( _mm_srli_pi32( a, n ), \
_mm_slli_pi32( a, 32-(n) ) )
#define mm64_rol_16( a, n ) \
_mm_or_si64( _mm_slli_pi16( (__m64)(a), n ), \
_mm_srli_pi16( (__m64)(a), 16-(n) ) )
_mm_or_si64( _mm_slli_pi16( a, n ), \
_mm_srli_pi16( a, 16-(n) ) )
#define mm64_ror_16( a, n ) \
_mm_or_si64( _mm_srli_pi16( (__m64)(a), n ), \
_mm_slli_pi16( (__m64)(a), 16-(n) ) )
_mm_or_si64( _mm_srli_pi16( a, n ), \
_mm_slli_pi16( a, 16-(n) ) )
// Rotate packed elements across lanes. Useful for byte swap and byte
// rotation.
// _mm_shuffle_pi8 requires SSSE3 while _mm_shuffle_pi16 requires SSE
// even though these are MMX instructions.
// Swap hi & lo 32 bits.
#define mm64_swap32( a ) _mm_shuffle_pi16( (__m64)(a), 0x4e )
#define mm64_swap_32( a ) _mm_shuffle_pi16( a, 0x4e )
#define mm64_ror1x16_64( a ) _mm_shuffle_pi16( (__m64)(a), 0x39 )
#define mm64_rol1x16_64( a ) _mm_shuffle_pi16( (__m64)(a), 0x93 )
#define mm64_ror64_1x16( a ) _mm_shuffle_pi16( a, 0x39 )
#define mm64_rol64_1x16( a ) _mm_shuffle_pi16( a, 0x93 )
// Swap hi & lo 16 bits of each 32 bit element
#define mm64_swap16_32( a ) _mm_shuffle_pi16( (__m64)(a), 0xb1 )
#define mm64_swap32_16( a ) _mm_shuffle_pi16( a, 0xb1 )
#if defined(__SSSE3__)
// Endian byte swap packed elements
// A vectorized version of the u64 bswap, use when data already in MMX reg.
#define mm64_bswap_64( v ) \
_mm_shuffle_pi8( (__m64)v, (__m64)0x0001020304050607 )
#define mm64_bswap_32( v ) \
_mm_shuffle_pi8( (__m64)v, (__m64)0x0405060700010203 )
_mm_shuffle_pi8( v, (__m64)0x0405060700010203 )
#define mm64_bswap_16( v ) \
_mm_shuffle_pi8( (__m64)v, (__m64)0x0607040502030001 )
_mm_shuffle_pi8( v, (__m64)0x0607040502030001 )
// Rotate right by c bytes
static inline __m64 mm64_ror_x8( __m64 v, const int c )
{ return _mm_alignr_pi8( v, v, c ); }
#else
#define mm64_bswap_64( v ) \
(__m64)__builtin_bswap64( (uint64_t)v )
// These exist only for compatibility with CPUs without SSSE3. MMX doesn't
// have an extract 32 instruction so pointers are needed to access elements.
// It's more efficient for the caller to use scalar variables and call
// bswap_32 directly.
#define mm64_bswap_32( v ) \
_mm_set_pi32( __builtin_bswap32( ((uint32_t*)&v)[1] ), \
__builtin_bswap32( ((uint32_t*)&v)[0] ) )
@@ -115,17 +103,6 @@
#endif
// 64 bit mem functions use integral sizes instead of bytes, data must
// be aligned to 64 bits.
static inline void memcpy_m64( __m64 *dst, const __m64 *src, int n )
{ for ( int i = 0; i < n; i++ ) dst[i] = src[i]; }
static inline void memset_zero_m64( __m64 *src, int n )
{ for ( int i = 0; i < n; i++ ) src[i] = (__m64)0ULL; }
static inline void memset_m64( __m64 *dst, const __m64 a, int n )
{ for ( int i = 0; i < n; i++ ) dst[i] = a; }
#endif // MMX
#endif // SIMD_64_H__


@@ -1,69 +1,20 @@
#if !defined(SIMD_INT_H__)
#define SIMD_INT_H__ 1
///////////////////////////////////
//
// Integers up to 128 bits.
//
// These utilities enhance support for integers up to 128 bits.
// All standard operations are supported on 128 bit integers except
// numeric constant representation and IO. 128 bit integers must be built
// and displayed as 2 64 bit halves, just like the old times.
//
// Some utilities are also provided for smaller integers, most notably
// bit rotation.
// MMX has no extract instruction for 32 bit elements, so use the following
// instead. Lo is trivial, hi is a simple shift.
// Input may be uint64_t or __m64, returns uint32_t.
#define u64_extr_lo32(a) ( (uint32_t)( (uint64_t)(a) ) )
#define u64_extr_hi32(a) ( (uint32_t)( ((uint64_t)(a)) >> 32) )
#define u64_extr_32( a, n ) ( (uint32_t)( (a) >> ( ( 2-(n)) <<5 ) ) )
#define u64_extr_16( a, n ) ( (uint16_t)( (a) >> ( ( 4-(n)) <<4 ) ) )
#define u64_extr_8( a, n ) ( (uint8_t) ( (a) >> ( ( 8-(n)) <<3 ) ) )
// Rotate bits in various sized integers.
#define u64_ror_64( x, c ) \
(uint64_t)( ( (uint64_t)(x) >> (c) ) | ( (uint64_t)(x) << (64-(c)) ) )
#define u64_rol_64( x, c ) \
(uint64_t)( ( (uint64_t)(x) << (c) ) | ( (uint64_t)(x) >> (64-(c)) ) )
#define u32_ror_32( x, c ) \
(uint32_t)( ( (uint32_t)(x) >> (c) ) | ( (uint32_t)(x) << (32-(c)) ) )
#define u32_rol_32( x, c ) \
(uint32_t)( ( (uint32_t)(x) << (c) ) | ( (uint32_t)(x) >> (32-(c)) ) )
#define u16_ror_16( x, c ) \
(uint16_t)( ( (uint16_t)(x) >> (c) ) | ( (uint16_t)(x) << (16-(c)) ) )
#define u16_rol_16( x, c ) \
(uint16_t)( ( (uint16_t)(x) << (c) ) | ( (uint16_t)(x) >> (16-(c)) ) )
#define u8_ror_8( x, c ) \
(uint8_t) ( ( (uint8_t) (x) >> (c) ) | ( (uint8_t) (x) << ( 8-(c)) ) )
#define u8_rol_8( x, c ) \
(uint8_t) ( ( (uint8_t) (x) << (c) ) | ( (uint8_t) (x) >> ( 8-(c)) ) )
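// Worked examples (illustrative only): these rotates compile to single
// rol/ror instructions on x86_64.
static inline int rotate_selftest( void )
{
   uint32_t x = u32_ror_32( 0x11223344, 8 );   // 0x44112233
   uint64_t y = u64_rol_64( 1ULL, 63 );        // 0x8000000000000000
   return ( x == 0x44112233 ) && ( y == 0x8000000000000000ULL );
}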
// Endian byte swap
#define bswap_64( a ) __builtin_bswap64( a )
#define bswap_32( a ) __builtin_bswap32( a )
// 64 bit mem functions use integral sizes instead of bytes, data must
// be aligned to 64 bits. Mostly for scaled indexing convenience.
static inline void memcpy_64( uint64_t *dst, const uint64_t *src, int n )
{ for ( int i = 0; i < n; i++ ) dst[i] = src[i]; }
static inline void memset_zero_64( uint64_t *src, int n )
{ for ( int i = 0; i < n; i++ ) src[i] = 0ull; }
static inline void memset_64( uint64_t *dst, const uint64_t a, int n )
{ for ( int i = 0; i < n; i++ ) dst[i] = a; }
// safe division, integer or floating point
#define safe_div( dividend, divisor, safe_result ) \
( (divisor) == 0 ? safe_result : ( (dividend) / (divisor) ) )
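// Usage sketch (variable names hypothetical): guard the first hashrate
// sample against a zero elapsed time.
//
//   double rate = safe_div( (double)hashes_done, elapsed_secs, 0.0 );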
///////////////////////////////////////
//
// 128 bit integers
//
// 128 bit integers are inefficient and not a shortcut for __m128i.
// Native type __int128 supported starting with GCC-4.8.
//
// __int128 uses two 64 bit GPRs to hold the data. The main benefits are
@@ -94,31 +45,12 @@ static inline void memset_64( uint64_t *dst, const uint64_t a, int n )
typedef __int128 int128_t;
typedef unsigned __int128 uint128_t;
// Maybe useful for making constants.
#define mk_uint128( hi, lo ) \
( ( (uint128_t)(hi) << 64 ) | ( (uint128_t)(lo) ) )
// Extracting the low bits is a trivial cast.
// These specialized functions are optimized while providing a
// consistent interface.
#define u128_hi64( x ) ( (uint64_t)( (uint128_t)(x) >> 64 ) )
#define u128_lo64( x ) ( (uint64_t)(x) )
// Generic extract, don't use for extracting low bits, cast instead.
#define u128_extr_64( a, n ) ( (uint64_t)( (a) >> ( ( 2-(n)) <<6 ) ) )
#define u128_extr_32( a, n ) ( (uint32_t)( (a) >> ( ( 4-(n)) <<5 ) ) )
#define u128_extr_16( a, n ) ( (uint16_t)( (a) >> ( ( 8-(n)) <<4 ) ) )
#define u128_extr_8( a, n ) ( (uint8_t) ( (a) >> ( (16-(n)) <<3 ) ) )
// Not much need for this but it fills a gap.
#define u128_ror_128( x, c ) \
( ( (uint128_t)(x) >> (c) ) | ( (uint128_t)(x) << (128-(c)) ) )
#define u128_rol_128( x, c ) \
( ( (uint128_t)(x) << (c) ) | ( (uint128_t)(x) >> (128-(c)) ) )
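// Illustrative only: build a 128 bit constant and rotate it. A rotate by
// 64 simply swaps the two 64 bit halves.
static inline uint64_t u128_rotate_example( void )
{
   uint128_t x = mk_uint128( 0x0123456789abcdefULL, 0xfedcba9876543210ULL );
   x = u128_ror_128( x, 64 );     // swap hi and lo
   return u128_hi64( x );         // 0xfedcba9876543210
}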
#endif // GCC_INT128
#endif // SIMD_INT_H__

util.c

@@ -943,6 +943,140 @@ bool jobj_binary(const json_t *obj, const char *key, void *buf, size_t buflen)
return true;
}
static uint32_t bech32_polymod_step(uint32_t pre) {
uint8_t b = pre >> 25;
return ((pre & 0x1FFFFFF) << 5) ^
(-((b >> 0) & 1) & 0x3b6a57b2UL) ^
(-((b >> 1) & 1) & 0x26508e6dUL) ^
(-((b >> 2) & 1) & 0x1ea119faUL) ^
(-((b >> 3) & 1) & 0x3d4233ddUL) ^
(-((b >> 4) & 1) & 0x2a1462b3UL);
}
static const int8_t bech32_charset_rev[128] = {
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
15, -1, 10, 17, 21, 20, 26, 30, 7, 5, -1, -1, -1, -1, -1, -1,
-1, 29, -1, 24, 13, 25, 9, 8, 23, -1, 18, 22, 31, 27, 19, -1,
1, 0, 3, 16, 11, 28, 12, 14, 6, 4, 2, -1, -1, -1, -1, -1,
-1, 29, -1, 24, 13, 25, 9, 8, 23, -1, 18, 22, 31, 27, 19, -1,
1, 0, 3, 16, 11, 28, 12, 14, 6, 4, 2, -1, -1, -1, -1, -1
};
static bool bech32_decode(char *hrp, uint8_t *data, size_t *data_len, const char *input) {
uint32_t chk = 1;
size_t i;
size_t input_len = strlen(input);
size_t hrp_len;
int have_lower = 0, have_upper = 0;
if (input_len < 8 || input_len > 90) {
return false;
}
*data_len = 0;
while (*data_len < input_len && input[(input_len - 1) - *data_len] != '1') {
++(*data_len);
}
hrp_len = input_len - (1 + *data_len);
if (1 + *data_len >= input_len || *data_len < 6) {
return false;
}
*(data_len) -= 6;
for (i = 0; i < hrp_len; ++i) {
int ch = input[i];
if (ch < 33 || ch > 126) {
return false;
}
if (ch >= 'a' && ch <= 'z') {
have_lower = 1;
} else if (ch >= 'A' && ch <= 'Z') {
have_upper = 1;
ch = (ch - 'A') + 'a';
}
hrp[i] = ch;
chk = bech32_polymod_step(chk) ^ (ch >> 5);
}
hrp[i] = 0;
chk = bech32_polymod_step(chk);
for (i = 0; i < hrp_len; ++i) {
chk = bech32_polymod_step(chk) ^ (input[i] & 0x1f);
}
++i;
while (i < input_len) {
int v = (input[i] & 0x80) ? -1 : bech32_charset_rev[(int)input[i]];
if (input[i] >= 'a' && input[i] <= 'z') have_lower = 1;
if (input[i] >= 'A' && input[i] <= 'Z') have_upper = 1;
if (v == -1) {
return false;
}
chk = bech32_polymod_step(chk) ^ v;
if (i + 6 < input_len) {
data[i - (1 + hrp_len)] = v;
}
++i;
}
if (have_lower && have_upper) {
return false;
}
return chk == 1;
}
static bool convert_bits(uint8_t *out, size_t *outlen, int outbits, const uint8_t *in, size_t inlen, int inbits, int pad) {
uint32_t val = 0;
int bits = 0;
uint32_t maxv = (((uint32_t)1) << outbits) - 1;
while (inlen--) {
val = (val << inbits) | *(in++);
bits += inbits;
while (bits >= outbits) {
bits -= outbits;
out[(*outlen)++] = (val >> bits) & maxv;
}
}
if (pad) {
if (bits) {
out[(*outlen)++] = (val << (outbits - bits)) & maxv;
}
} else if (((val << (outbits - bits)) & maxv) || bits >= inbits) {
return false;
}
return true;
}
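// Worked example (illustrative): regroup two 8 bit bytes into 5 bit groups
// with padding. Input 0xff 0x01 is the bit stream 11111111 00000001,
// which regroups as 11111 11100 00000 1(0000):
//
//   uint8_t in[2] = { 0xff, 0x01 }, out[4]; size_t n = 0;
//   convert_bits( out, &n, 5, in, 2, 8, 1 );
//   // out = { 0x1f, 0x1c, 0x00, 0x10 }, n = 4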
static bool segwit_addr_decode(int *witver, uint8_t *witdata, size_t *witdata_len, const char *addr) {
uint8_t data[84];
char hrp_actual[84];
size_t data_len;
if (!bech32_decode(hrp_actual, data, &data_len, addr)) return false;
if (data_len == 0 || data_len > 65) return false;
if (data[0] > 16) return false;
*witdata_len = 0;
if (!convert_bits(witdata, witdata_len, 8, data + 1, data_len - 1, 5, 0)) return false;
if (*witdata_len < 2 || *witdata_len > 40) return false;
if (data[0] == 0 && *witdata_len != 20 && *witdata_len != 32) return false;
*witver = data[0];
return true;
}
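// Sanity check (illustrative), using the well known BIP-173 test vector:
//
//   int ver; uint8_t prog[40]; size_t len;
//   segwit_addr_decode( &ver, prog, &len,
//                       "bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4" );
//   // returns true with ver == 0 and len == 20 (a P2WPKH program)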
static size_t bech32_to_script(uint8_t *out, size_t outsz, const char *addr) {
uint8_t witprog[40];
size_t witprog_len;
int witver;
if (!segwit_addr_decode(&witver, witprog, &witprog_len, addr))
return 0;
if (outsz < witprog_len + 2)
return 0;
out[0] = witver ? (0x50 + witver) : 0;
out[1] = witprog_len;
memcpy(out + 2, witprog, witprog_len);
if ( opt_debug )
applog( LOG_INFO, "Coinbase address uses Bech32 coding");
return witprog_len + 2;
}
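// For a witness v0 key hash address the generated script is the standard
// 22 byte P2WPKH scriptPubKey:
//   out[0]     = 0x00   (OP_0, witness version)
//   out[1]     = 0x14   (push 20 bytes)
//   out[2..21] = the 20 byte witness program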
size_t address_to_script( unsigned char *out, size_t outsz, const char *addr )
{
unsigned char addrbin[ pk_buffer_size_max ];
@@ -950,12 +1084,15 @@ size_t address_to_script( unsigned char *out, size_t outsz, const char *addr )
size_t rv;
if ( !b58dec( addrbin, outsz, addr ) )
return 0;
return bech32_to_script( out, outsz, addr );
addrver = b58check( addrbin, outsz, addr );
if ( addrver < 0 )
return 0;
if ( opt_debug )
applog( LOG_INFO, "Coinbase address uses B58 coding");
switch ( addrver )
{
case 5: /* Bitcoin script hash */
@@ -1048,53 +1185,51 @@ bool fulltest( const uint32_t *hash, const uint32_t *target )
return rc;
}
// Mathematically the difficulty is simply the reciprocal of the hash: d = 1/h.
// Both are real numbers but the hash (target) is represented as a 256 bit
// fixed point number with the upper 32 bits representing the whole integer
// part and the lower 224 bits representing the fractional part:
// target[ 255:224 ] = trunc( 1/diff )
// target[ 223: 0 ] = frac( 1/diff )
//
// The 256 bit hash is exact but any floating point representation is not.
// Stratum provides the target difficulty as double precision, inexact,
// which must be converted to a hash target. The converted hash target will
// likely be less precise due to inexact input and conversion error.
// On the other hand getwork provides a 256 bit hash target which is exact.
//
// How much precision is needed?
//
// 128 bit types are implemented in software by the compiler on 64 bit
// hardware resulting in lower performance and more error than would be
// expected with a hardware 128 bit implementation.
// Float80 exploits the internals of the FP unit which provide a 64 bit
// mantissa in an 80 bit register with hardware rounding. When the destination
// is double the data is rounded to float64 format. Long double returns all
// 80 bits without rounding and including any accumulated computation error.
// Float80 does not fit efficiently in memory.
//
// Significant digits:
// 256 bit hash: 76
// float: 7 (float32, 80 bits with rounding to 32 bits)
// double: 15 (float64, 80 bits with rounding to 64 bits)
// long double: 19 (float80, 80 bits with no rounding)
// __float128: 33 (128 bits with no rounding)
// uint32_t: 9
// uint64_t: 19
// uint128_t: 38
//
// The concept of significant digits doesn't apply to the 256 bit hash
// representation. It's fixed point, making leading zeros significant and
// limiting its range and precision due to fewer non-zero significant digits.
//
// Doing calculations with float128 and uint128 increases precision for
// target_to_diff, but doesn't help with stratum diff being limited to
// double precision. Is the extra precision really worth the extra cost?
//
// With float128 the error rate is 1/1e33 compared with 1/1e15 for double.
// For double that's 1 error in every petahash with a very low difficulty,
// not a likely situation. With higher difficulty effective precision
// increases.
//
// Unfortunately I can't get float128 to work so long double (float80) is
// as precise as it gets.
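// A minimal sketch of the conversion described above (not necessarily the
// exact code used by this miner): derive the 256 bit little endian target
// from a stratum difficulty using double precision. Assumes <string.h>.
static void diff_to_target_sketch( uint32_t *target, double diff )
{
   uint64_t m;
   int k;
   // scale diff down until the most significant 32 bit word is found
   for ( k = 6; k > 0 && diff > 1.0; k-- )
      diff /= 4294967296.0;                // 2^32
   m = (uint64_t)( 4294901760.0 / diff );  // 0xffff0000, bitcoin pdiff style
   if ( m == 0 && k == 6 )
      memset( target, 0xff, 32 );
   else
   {
      memset( target, 0, 32 );
      target[k]     = (uint32_t) m;
      target[k + 1] = (uint32_t)( m >> 32 );
   }
}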
@@ -1488,9 +1623,6 @@ static bool stratum_parse_extranonce(struct stratum_ctx *sctx, json_t *params, i
if ( !opt_quiet ) /* pool dynamic change */
applog( LOG_INFO, "Stratum extranonce1= %s, extranonce2 size= %d",
xnonce1, xn2_size);
// if (pndx == 0 && opt_debug)
// applog(LOG_DEBUG, "Stratum set nonce %s with extranonce2 size=%d",
// xnonce1, xn2_size);
return true;
out:
@@ -1640,8 +1772,6 @@ bool stratum_authorize(struct stratum_ctx *sctx, const char *user, const char *p
opt_extranonce = false;
goto out;
}
if ( !opt_quiet )
applog( LOG_INFO, "Extranonce subscription enabled" );
sret = stratum_recv_line( sctx );
if ( sret )
@@ -1660,8 +1790,8 @@ bool stratum_authorize(struct stratum_ctx *sctx, const char *user, const char *p
applog( LOG_WARNING, "Stratum answer id is not correct!" );
}
res_val = json_object_get( extra, "result" );
// if (opt_debug && (!res_val || json_is_false(res_val)))
// applog(LOG_DEBUG, "extranonce subscribe not supported");
if (opt_debug && (!res_val || json_is_false(res_val)))
applog(LOG_DEBUG, "Method extranonce.subscribe is not supported");
json_decref( extra );
}
free(sret);

verthash-help.txt (new file)

@@ -0,0 +1,80 @@
Quickstart:
----------
First time mining verthash or don't have a Verthash data file:
--algo verthash --verify --url ...
Verthash data file already exists:
--algo verthash --data-file /path/to/verthash.dat --url ...
Background:
----------
The Verthash algorithm requires a data file for hashing. This file is
static, portable, and only needs to be created once.
A Verthash data file created by VerthashMiner can also be used by
cpuminer-opt, even simultaneously by both miners.
Due to its size (>1GB) it is recommended that one data file be created and
stored in a permanent location accessible to any miner that wants to use it.
New command line options:
------------------------
cpuminer-opt adds two new command line options for verthash. The names
and some behaviour are changed from VerthashMiner.
--data-file /path/to/verthash.dat
Default when not used is verthash.dat in the current working directory.
--verify
verify integrity of file specified by --data-file, or if not specified
the default data file if it exists, or create a default file and verify it
if one does not yet exist. Data file verification is disabled by default.
Detailed usage:
--------------
If a data file already exists it can be selected using the --data-file
option to specify the path and name of the file.
--algo verthash --data-file /path/to/verthash.dat --url ...
If the --data-file option is not used the default is to use 'verthash.dat'
from the current working directory.
If no data file exists it can be created by using the --verify option
without the --data-file option. If the default data file is not found in
the current directory it will be created.
--algo verthash --verify --url ...
Data file creation can take up to 30 minutes on a spinning hard drive.
Once created the new data file will be verified and used immediately
if a valid url and user were included on the command line.
A default data file can be created by omitting the url option. That will
either verify an existing default data file or create one and verify it,
then exit.
--algo verthash --verify
A data file will never be created if --data-file is specified. The miner
will exit with an error if the file is not found. This is to avoid accidentally
creating an unwanted data file due to a typo.
After creation the data file can be moved to a more convenient location and
referenced by --data-file, or left where it is and used by default without the
--data-file option.
Data file verification takes a few seconds and is disabled by default.
VerthashMiner enables data file verification by default and has an option to
disable it.
The --verify option is intended primarily to create a new file. It's
not necessary or useful to verify a file every time the miner is started.


@@ -31,6 +31,7 @@ mkdir release
cp README.txt release/
cp README.md release/
cp RELEASE_NOTES release/
cp verthash-help.txt release/
cp $MINGW_LIB/zlib1.dll release/
cp $MINGW_LIB/libwinpthread-1.dll release/
cp $GCC_MINGW_LIB/libstdc++-6.dll release/