<p dir="auto">Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last report.
<h1><a href="https://gitlab.syncad.com/hive/hive" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">Hived: core blockchain node software
<h2>Split block logs (enables “lite” hived nodes for services)
<p dir="auto">We are adding support for “lite” hived nodes that don’t retain the entire blockchain. Most of the work is done at this point and the remaining portion will likely be completed in the next week or so.
<p dir="auto">Ideally, hived nodes should retain the entire blockchain history, as this provides more redundant storage to any node that is not yet in sync with the current Hive head block, allowing such nodes to get the old blocks they don’t have from their peers.
<p dir="auto">But at some point, such storage becomes overkill, and it also increases the costs of running services that require a hived node because of the associated storage costs. Currently, even with the new features for compressing block storage that cut storage costs roughly in half, the block_log file is 486GB in size (almost ½ a terabyte).
<h3>Introducing the <code>block-log-split option
<p dir="auto">To enable the operation of hived nodes that need less storage, there’s a new command-line option to hived called <code>block-log-split that enables a node to operate with fewer blocks retained. By default, this value is set to -1, which means operate in the standard way (with a single, full-size block log). Setting this value to 0 tells hived not to maintain any block history (no block_log file).
<p dir="auto">Setting the block-log-split option to any larger value tells hived to maintain at least that many million recent blocks. For example, setting <code>block-log-split=2 means keep at least the last 2 million blocks. In this mode, the block log is stored in multiple files, each 1 million blocks long and split at 1-million-block boundaries (except for the current “top” file, which only contains blocks up to the current head block). At the current time, for instance, block_log_part.0084 contains blocks up to 84M, block_log_part.0085 contains blocks up to 85M, and the “top” file block_log_part.0086 contains blocks from 85,000,001 to the current head block.
<p dir="auto">In split block mode, only the “top” file will be written to. The other files will only be read from when the node is supplying blocks to other peer nodes that need to sync to the head block.
<p dir="auto">If you want your node to maintain the entire blockchain, but store the blocks in split mode format (instead of one big file), you can simply set block-log-split to a large value like 10000. In this case, at the current time, you would have 86 block_log_part files, starting with block_log_part.0001. Note that for each block_log_part file, there is also an associated artifacts file (e.g. block_log_part.0001.artifacts).
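<p dir="auto">To make the modes concrete, here is a sketch of the corresponding settings (shown here as config-file entries; this assumes the option is accepted in config.ini as well as on the command line, so check your hived version’s help output to confirm):

```ini
# -1 (default): one full-size block_log file
block-log-split = -1

# 0: keep no block history at all (no block_log file)
# block-log-split = 0

# N > 0: keep at least N million recent blocks in split part files
# block-log-split = 2

# Large value: keep the whole chain, but stored as split part files
# block-log-split = 10000
```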
<h3>Switching an existing node to split log mode
<p dir="auto"><span>We’re also making it easy to switch a currently operating node to split block_log mode without requiring a resync of the entire blockchain. All you will have to do is shut down your node, change your configuration to the desired number of block_log parts to maintain, and restart your node. Your node will read your existing full-size block log, split it as necessary, then sync to get any blocks that were missed while it was shut down. After the node has finished splitting the block log, you can delete the original full-size block log (or move it to slower storage for backup purposes). Work on this issue is being tracked here: <a href="https://gitlab.syncad.com/hive/hive/-/issues/686" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/issues/686
<p dir="auto">If you are especially space-constrained, you can start by putting the full-size block_log on a different, potentially slower, storage device, then create a symbolic link to it in your hived data directory. For node operators who plan to replay often (e.g. developers testing new versions of hived or HAF), this is probably a handy configuration, since such nodes will still need all the early blocks to perform replays.
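<p dir="auto">The symbolic-link setup can be sketched as follows (the temp directories below are just stand-ins for a real hived data directory and a slower storage device; substitute your own paths):

```shell
# Stand-ins for a hived data directory and a slower storage device
DATADIR=$(mktemp -d)
SLOW=$(mktemp -d)

# The full-size block log lives on the slower device...
mkdir -p "$DATADIR/blockchain"
touch "$SLOW/block_log"

# ...and the data directory just holds a symbolic link pointing at it
ln -s "$SLOW/block_log" "$DATADIR/blockchain/block_log"

# Verify the link resolves to the slow-storage copy
ls -l "$DATADIR/blockchain/block_log"
```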
<h2>Other hived changes
<ul>
<li><span>Merged improved market history API calls <a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1177" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1177
<li><span>Implemented runtime reporting of internal memory allocation being done inside hived multi-indexes (similarly to block-stats). We also added support for Grafana data collection: <a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1192" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1192 <a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1269" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1269
<ul>
<li><span>New tests: <a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1257" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1257 <a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1263" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1263 <a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1260" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1260
<h1>Beekeeper (cryptographic key management software)
<p dir="auto">Beekeeper now has buffer encryption. This was needed to support Wax in Clive (the new console-based wallet) and other frontends (e.g. Denser, the upcoming replacement for Condenser).
<p dir="auto">There were also various bugfixes and API changes in beekeeper to improve security:
<ul>
<li><span><a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1278" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1278
<li><span><a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1273" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1273
<li><span><a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1270" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1270
<li><span><a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1267" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1267
<li><span><a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1265" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1265
<li><span><a href="https://gitlab.syncad.com/hive/hive/-/merge_requests/1261" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/merge_requests/1261
<h2>Python Beekeeper (a Python wrapper for Beekeeper)
<ul>
<li>Implementation of an object-specific interface to easily perform beekeeper actions. This is the first step needed to create an object-based Wax implementation for Python as was done in the Typescript wrapper.
<li>API call performance optimizations
<h1><a href="https://gitlab.syncad.com/hive/haf_api_node" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">HAF API node
<h2>Improved haf_api_node dataset structure
<p dir="auto">The haf_api_node directory structure has been changed to store the shared_memory file in a separate dataset from the blockchain. This offers a few benefits: 1) block_logs can be stored on a slower storage device, 2) you can set different ZFS properties per dataset, such as compression level (block_logs are already compressed, so there’s no point compressing them again, but the state file and WAL files may benefit from compression), and 3) you can re-use the current blockchain snapshot when you upgrade to new versions of hived that have incompatible shared memory formats (similarly, you can do a shallow clone of just the blockchain directory to run two hived instances on the same system with reduced storage, something we are now doing on our own servers).<br />
When you pull the latest changes, your existing stack should still work, but it will not follow the new suggested layout. To transform your existing dataset to the new layout:<br />
• docker compose down<br />
• git pull<br />
• edit your environment to use HIVE_API_NODE_VERSION=1.27.5 and set HAF_SHM_DIRECTORY="${TOP_LEVEL_DATASET_MOUNTPOINT}/shared_memory"<br />
• sudo zfs create -o atime=off -o compression=off haf-pool/haf-datadir/shared_memory<br />
• sudo chown 1000:100 /haf-pool/haf-datadir/shared_memory<br />
• sudo mv /haf-pool/haf-datadir/blockchain/shared_memory.bin /haf-pool/haf-datadir/blockchain/haf_wal /haf-pool/haf-datadir/shared_memory<br />
• docker compose up -d
<h2>Added documentation on how to compress API responses:
<p dir="auto">Most of the data the Hive API serves up compresses well. Calls like get_block()<br />
generate a lot of data, and will typically compress 3x or better. You can<br />
decrease your bandwidth (and your users' bandwidth) by enabling compression,<br />
at the expense of higher CPU usage on your server. To do this, drop code<br />
like this in a file called, say, <code>compression.snippet:
<pre><code>encode {
zstd
gzip
minimum_length 1024
}
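<p dir="auto">For example, a Caddyfile could then pull the snippet into a site block like this (the hostname and upstream port below are placeholders; adjust them to your deployment):

```
api.example.com {
        import compression.snippet
        reverse_proxy localhost:8080
}
```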
<h1><a href="https://gitlab.syncad.com/hive/haf" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">HAF (Hive Application Framework)
<h2>Documenting HAF's new REST-based API (transitioning away from JSON-based API)
<p dir="auto">We’ve created a methodology for documenting the new REST-based APIs for Hive in order to keep the documentation and the actual APIs synchronized, so the documentation doesn’t become out-of-date as changes are made over time.
<p dir="auto">Under this methodology, both the OpenAPI docs and the top-level API functions written in SQL are stored in the same file. The OpenAPI documentation is used to create Swagger-based interactive documentation that can be directly hosted on Hive API nodes in a docker container.
<p dir="auto">An API developer will first write OpenAPI function specifications for the API calls they plan to create, then run a new tool which processes the OpenAPI function specification into a SQL function prototype. The tool can also be run in-place on a file containing existing SQL API functions to update the function signatures whenever the specification of an API call needs to be changed.
<p dir="auto">The OpenAPI specification can also be used to create rewrite rules for caddy/nginx to simplify the creation of more “standard” REST APIs. Some work still needs to be done to figure out how the rewrite rules will be updated in the rewrite-processing engine container (e.g. caddy/nginx/varnish) when a new API container whose specs have changed is launched.
<p dir="auto"><span>More information about the new API documentation process can be found here: <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/178" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/178
<h2>Miscellaneous improvements to HAF
<ul>
<li><span>Database space optimization (reduces storage by 230GB) <a href="https://gitlab.syncad.com/hive/haf/-/merge_requests/487" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/merge_requests/487 <a href="https://gitlab.syncad.com/hive/haf/-/merge_requests/492" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/merge_requests/492
<li><span>Bugfixes: <a href="https://gitlab.syncad.com/hive/haf/-/merge_requests/483" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/merge_requests/483 <a href="https://gitlab.syncad.com/hive/haf/-/merge_requests/477" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/merge_requests/477
<li><span>Improved error handling inside HAF’s C++ extension: <a href="https://gitlab.syncad.com/hive/haf/-/merge_requests/461" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/merge_requests/461
<li>Support for keyauth data collector (needed by block explorer to show account authority/key info).<br /><span>
This also allows implementing the account_by_key API outside of hived: <a href="https://gitlab.syncad.com/hive/haf/-/merge_requests/357" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/merge_requests/357
<li><span>Support for PostgreSQL 16 (we are using Postgres 16 in the develop branch now): <a href="https://gitlab.syncad.com/hive/haf/-/merge_requests/488" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/merge_requests/488 <a href="https://gitlab.syncad.com/hive/haf/-/merge_requests/485" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/merge_requests/485
<li><span>Ability to deploy the same HAF application twice in separate schemas: <a href="https://gitlab.syncad.com/hive/haf/-/merge_requests/484" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/merge_requests/484 (and further MRs in each app)
<h1>2nd layer “Lite accounts” that are transportable across the Hive ecosystem
<p dir="auto">We are creating a new HAF app to manage the creation and maintenance of “Lite” accounts that can be used by any 2nd layer app to sign 2nd layer transactions. The specification for this also includes documentation for how we plan to support 2nd layer transactions.
<p dir="auto"><span>This is a fairly complex topic and aspects of the design are still being worked out, so I’ll have much more to say about this in future reports, but for anyone interested in creating 2nd layer apps that require users to generate custom_json operations, I recommend reading the following link for more details on how the design is developing so far: <a href="https://gitlab.syncad.com/hive/haf/-/issues/214" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf/-/issues/214.
<h1><a href="https://gitlab.syncad.com/hive/hivemind" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">Hivemind API (social media API)
<ul>
<li><span>Eliminated excessive exception logging in SQL log files (requires Postgres 15 or better): <a href="https://gitlab.syncad.com/hive/hivemind/-/merge_requests/688" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hivemind/-/merge_requests/688
<li><span>Bugfixes specific to hivemind sync stopping that could lead to data loss when killing the container: <a href="https://gitlab.syncad.com/hive/hivemind/-/merge_requests/643" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hivemind/-/merge_requests/643 <a href="https://gitlab.syncad.com/hive/hivemind/-/merge_requests/692" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hivemind/-/merge_requests/692
<li><span>Improved speed of condenser.get_followers API call: <a href="https://gitlab.syncad.com/hive/hivemind/-/merge_requests/686" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hivemind/-/merge_requests/686
<li><span>Eliminated Python code from hivemind’s server process to improve performance (replacing it with pure SQL and PostgREST): <a href="https://gitlab.syncad.com/hive/hivemind/-/merge_requests/691" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hivemind/-/merge_requests/691 (still in progress)
<h1><a href="https://gitlab.syncad.com/hive/haf_block_explorer" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">HAF Block Explorer and <a href="https://gitlab.syncad.com/hive/balance_tracker" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">Balance Tracker APIs
<ul>
<li>Designed REST API structure for block explorer calls
<li><span>Verified support for Postgres 16 <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/175" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/175
<li><span>Integrated keyauth provider: <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/108" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/108
<li><span>API improvements: <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/174" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/174
<li><span>Refactor and improve speed of get_account API call (execution time dropped from 1.5-2 seconds to 50ms): <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/183" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/183 <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/172" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/172
<li><span>Bugfixes and improvements (e.g. disabled parts of APIs which require additional indexes): <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/184" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/184 <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/182" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/182
<li><span>Block explorer now embeds reputation_tracker to make account reputation values available to client APIs: <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/177" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/177 <a href="https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/181" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/181
<h2>Block explorer UI:
<ul>
<li>Filter dialog improvements
<li>Block search result page allows op-type filtering (also URL has embedded filter)
<li>Bugfixes specific to time/date display and UI tweaks
<h1><a href="https://gitlab.syncad.com/hive/wax" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">wax (New multi-language Hive API library)
<ul>
<li><span>Typescript-based transaction builder now supports encryption in operations: transfer (encrypted memo), comment, custom_json (only the internal json part is encrypted), and transfer from/to savings. You can use at most 2 public keys to perform the encryption (e.g. for the sender and intended receiver). For example, if you were sending “secret” commands to some game app that other players shouldn’t be able to read, you could encrypt the commands so that only the app can read them. For custom_json encryption, the original json contents are encrypted and wrapped into a sub-object with the key name “encrypted” and a string value like #xxxxx (the same format used for encrypted transfer memos). In this way we can recognize whether a given custom_json will require decrypting during processing. An example of an encrypted transfer can be found here: <a href="https://gitlab.syncad.com/hive/wax/-/blob/develop/wasm/" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/wax/-/blob/develop/wasm/ (tests/detailed/hive_base.ts?ref_type=heads#L240)
<li>Bugfixes to protobuf serialization
<li>Transaction builder interface improved to make it more intuitive
<li>New tests and Playwright test fixture improvements
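<p dir="auto">Based on the description above, a wrapped custom_json payload would look roughly like the following sketch (the #xxxxx string is a placeholder for the actual encrypted memo-format value, not real output):

```json
{
  "encrypted": "#xxxxx"
}
```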
<p dir="auto">Wax work in progress:
<ul>
<li>API call health-checker component for apps to aid users in endpoint URL selection.
<li><span>Prerequisite steps for publication at the official npm registry: <a href="https://registry.npmjs.org" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://registry.npmjs.org (Important note: the package scope has changed from hive to hiveio).
<li>Benchmarks to verify library performance after its size optimizations (to be done)
<h1><a href="https://gitlab.syncad.com/hive/clive" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">Clive: command-line and TUI wallet for Hive
<p dir="auto"><span>New version v1.27.5.10 released: <a href="https://gitlab.syncad.com/hive/clive/-/merge_requests/361" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/clive/-/merge_requests/361 (New features detailed in the link)
<h1>Some of our work in progress (or planned for near future)
<ul>
<li>Creation of HAF-based Lite Accounts application (implementation in progress)
<li><span>Developing spec for 2nd layer smart contract processing engine (most of docs are here: <a href="https://gitlab.syncad.com/hive/smarc/-/tree/abw_smarc_design/doc?ref_type=heads" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/smarc/-/tree/abw_smarc_design/doc?ref_type=heads )
<li><span>Many hived improvements: <a href="https://gitlab.syncad.com/hive/hive/-/issues/675" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://gitlab.syncad.com/hive/hive/-/issues/675
<li>Official release of wax and <a href="https://gitlab.syncad.com/hive/workerbee" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">workerbee npm package
<li>Finish new OpenAPI documentation of existing REST APIs (in particular block_explorer and balance_tracker)
<li>Create a release candidate for <a href="https://gitlab.syncad.com/hive/denser" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">Denser (replacement for Condenser)
<li>Continue adding new commands to Clive
<li>Release <a href="https://gitlab.syncad.com/hive/reputation_tracker" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">new reputation tracker app
<li>More hivemind performance improvements (continued replacement of Python code)
<li>Integrate reputation_tracker as a sub-app inside hivemind. This should improve space optimization (now hivemind must collect all votes to recalculate reputation) and sync speed.
<li>Eliminate timestamp in HAF operations table to reduce database size
<li>Redesign HAF main loop to make it less error-prone