zkSync Era, Pytorch 2.0, Google and Microsoft AI competition - CS News #7
Welcome to CS News #7!
CS News helps you keep track of the latest news across technology domains like AI, security, software development, and blockchain/P2P, and discover interesting new projects and techniques!
🚀 Matter Labs announces zkSync Era
Ethereum's goal is to become an infrastructure platform for new decentralized ecosystems called rollups. The goal of a rollup (also referred to as a Layer 2) is to offer better performance and features than Ethereum while relying on the security of its more than 500k validators.
A Zero Knowledge (ZK) rollup computes user transactions (like “User A sends 20 Token X to User B”) and generates a proof that makes it easy to verify that the transactions were properly executed and that the blockchain's state was modified correctly. This avoids the need to redo the computation yourself and access all the data to verify the transactions' execution. The proof is then uploaded to Ethereum (also referred to as Layer 1), where anyone can verify it.
zkSync is a ZK rollup developed by Matter Labs, which just launched its Era mainnet with many cool features:
- Solidity 0.8.x support for smart contract development
- Layer 1 → Layer 2 smart contract messaging, which allows developers to pass data from Ethereum to smart contracts on zkSync
- Account Abstraction support: a feature to create highly custom wallet management flows (which can work with any cryptography you choose, for example)
This launch follows the recent token launch of Arbitrum (another Layer 2), which is attracting more and more users to Layer 2s.
Read more here
🔥 Pytorch 2.0 has been released
PyTorch 2.0 has been released, bringing a host of improvements to the widely-used deep learning framework. The updated version offers significant enhancements in performance, flexibility, and compatibility.
PyTorch 2.0 focuses this release on the creation of partial graph mechanisms. In PyTorch 1.x, users had to choose between running their model in the default eager mode or in graph mode (using torch.fx, for example), the latter improving the model's performance.
This binary on/off choice introduces limitations: when some parts of the workflow cannot be translated into graph mode, the user is stuck using eager mode for the whole model, thus not benefiting from the advantages of graph mode. This is where the partial graph mechanisms introduced by PyTorch 2.0 come into play.
The entry point to these new partial graph mechanisms is torch.compile, which relies on three core components: TorchDynamo, AOT Autograd, and Inductor (or another compiler). Dynamo allows the user to construct partial graphs from their model: each piece of the model's Python bytecode that can be translated into a graph node will be, and the rest will remain as Python code.
This process results in a partial graph that needs to be linked with the eager Automatic Differentiation System (aka Autograd). This work is handled by AOT Autograd, which takes a partial graph as input and turns it into a custom autograd function.
Once the final graph is created, it can be compiled using any compatible compiler (e.g. nvFuser), but the one most showcased by PyTorch 2.0 is Inductor. Inductor is a “define-by-run” compiler built on top of OpenAI Triton. Defaulting to Triton shows PyTorch's willingness to move away from CUDA, which is well known to be a pain point for AI companies that do not want to rely on a proprietary, closed-source solution.
Note that torch.compile is a fully additive (and optional) feature and hence is 100% backward compatible by definition.
Moreover, PyTorch 2.0 introduces:
- Accelerated Transformers: a high-performance implementation for training and inference using custom kernel architectures for scaled dot product attention.
- Improved PyTorch MPS backend: GPU-accelerated training on Mac platforms with better correctness, stability, and operator coverage.
- torch.set_default_device and torch.device as context manager: users can change the default device used by factory functions.
- Dispatchable Collectives: an improved init_process_group() API for easier GPU and CPU support.
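A short sketch of two of the features above (tensor shapes here are arbitrary): the fused scaled dot product attention kernel and the new default-device setter.

```python
import torch
import torch.nn.functional as F

# Scaled dot product attention with the fused 2.0 kernel.
# Layout: (batch, heads, sequence length, head dimension).
q = torch.randn(2, 4, 8, 16)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)
out = F.scaled_dot_product_attention(q, k, v)

# Factory functions now honor a process-wide default device.
torch.set_default_device("cpu")
t = torch.ones(3)  # created on the default device, no `device=` argument
```

On machines with a supported GPU, replacing "cpu" with "cuda" would make every subsequent factory call allocate on the GPU without touching call sites.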
For more information on PyTorch 2.0:
→ Pytorch 2.0: An introduction
⚔️ Google and Microsoft Announce Competing AI-Powered Productivity Tools
Google recently unveiled a collection of generative AI-driven features for Google Workspace, while Microsoft launched Copilot in a competitive move.
Generative AI technology in Google Workspace and Microsoft Copilot saves users time and effort by assisting with tasks such as email management, idea generation, proofreading, and data analysis. These features have the potential to transform work processes, boosting efficiency and collaboration across platforms.
Both companies must navigate political implications, addressing concerns about user privacy, data governance, and technology misuse. They emphasise aligning products with AI principles, ensuring user control, and enabling IT departments to set suitable policies.
As AI-powered tools become more popular, the impact on the future of work will be substantial, leading to changes in job roles and responsibilities, and potentially job displacement in some sectors. Both companies focus on user control, privacy, and security, aiming to balance benefits with ethical considerations.
💫 Interesting stuff
Docker is deleting Open Source organisations - what you need to know
Stable Diffusion Reimagine (available on clipdrop)