AIMC Topic: Data Compression

Showing 51 to 60 of 150 articles

A Novel Deep-Learning Model Compression Based on Filter-Stripe Group Pruning and Its IoT Application.

Sensors (Basel, Switzerland)
Nowadays, there is a tradeoff between the deep-learning model-compression ratio and model accuracy. In this paper, a strategy for refining the pruning quantification and weights based on neural network filters is proposed. Firstly, filters in t...
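
As a rough, hedged illustration of the general idea behind filter pruning (not this paper's filter-stripe grouping), the sketch below ranks a convolution layer's filters by L1 norm and keeps only the strongest ones; the weight shape and keep ratio are illustrative assumptions.

```python
import numpy as np

def prune_filters_by_l1(conv_weights: np.ndarray, keep_ratio: float = 0.5):
    """Rank convolution filters by L1 norm and keep only the strongest ones.

    conv_weights: array of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weight tensor and the indices of the kept filters.
    """
    # One importance score per output filter: sum of absolute weights.
    scores = np.abs(conv_weights).reshape(conv_weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * conv_weights.shape[0])))
    keep_idx = np.sort(np.argsort(scores)[::-1][:n_keep])   # strongest filters, original order
    return conv_weights[keep_idx], keep_idx

# Toy usage: a layer with 8 filters, keep half of them.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_filters_by_l1(w, keep_ratio=0.5)
print(pruned.shape, kept)   # (4, 3, 3, 3) and the surviving filter indices
```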

High-Quality Video Watermarking Based on Deep Neural Networks and Adjustable Subsquares Properties Algorithm.

Sensors (Basel, Switzerland)
This paper presents a method of high-capacity and transparent watermarking based on the use of deep neural networks with the adjustable subsquares properties algorithm to encode the data of a watermark in high-quality video using the H.265/HEVC (Hi...
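
For orientation only, here is a toy block-based embedder that hides watermark bits in the least significant bits of frame blocks. This is a generic illustration of per-block embedding, not the paper's DNN/H.265 pipeline; the block size and the choice of carrier pixel are assumptions.

```python
import numpy as np

def embed_bits_lsb(frame: np.ndarray, bits, block: int = 8) -> np.ndarray:
    """Hide one watermark bit per (block x block) square of a grayscale frame."""
    out = frame.copy()
    h, w = frame.shape
    it = iter(bits)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            try:
                bit = next(it)
            except StopIteration:
                return out
            # Force the least significant bit of the block's top-left pixel.
            out[y, x] = (out[y, x] & 0xFE) | bit
    return out

frame = np.full((32, 32), 128, dtype=np.uint8)
marked = embed_bits_lsb(frame, [1, 0, 1, 1])
print([int(marked[0, x] & 1) for x in (0, 8, 16, 24)])   # [1, 0, 1, 1]
```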

Auxiliary Pneumonia Classification Algorithm Based on Pruning Compression.

Computational and mathematical methods in medicine
Pneumonia infection is the leading cause of death in young children. Pneumonia is commonly detected by doctors reading chest X-rays, a process whose results are easily affected by external factors. Assisting doctors in diagnosing pn...

Low-Complexity Adaptive Sampling of Block Compressed Sensing Based on Distortion Minimization.

Sensors (Basel, Switzerland)
Block compressed sensing (BCS) is suitable for image sampling and compression in resource-constrained applications. Adaptive sampling methods can effectively improve the rate-distortion performance of BCS. However, adaptive sampling methods bring hig...
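
A minimal sketch of block compressed sensing with a simple adaptive allocation: each block is measured as y = Phi @ x with a random Gaussian Phi, and higher-variance blocks receive more measurements. The variance-based allocation rule is an illustrative assumption, not the paper's distortion-minimizing scheme.

```python
import numpy as np

def adaptive_bcs_sample(image: np.ndarray, block: int = 16, total_rate: float = 0.25):
    """Measure each block as y = Phi @ x, spending more measurements on high-variance blocks."""
    rng = np.random.default_rng(0)
    h, w = image.shape
    blocks, variances = [], []
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = image[y:y + block, x:x + block].astype(float).ravel()
            blocks.append(b)
            variances.append(b.var())
    dim = block * block
    budget = int(total_rate * len(blocks) * dim)            # total measurement budget
    weights = np.array(variances) + 1e-8
    alloc = np.maximum(1, (budget * weights / weights.sum()).astype(int))
    measurements = []
    for b, m in zip(blocks, alloc):
        phi = rng.standard_normal((m, dim)) / np.sqrt(m)    # random Gaussian sensing matrix
        measurements.append(phi @ b)
    return measurements, alloc

img = np.zeros((64, 64)); img[16:48, 16:48] = np.arange(32 * 32).reshape(32, 32)
ys, alloc = adaptive_bcs_sample(img)
print(alloc.min(), alloc.max())   # flat blocks get few measurements, textured blocks get many
```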

Towards Convolutional Neural Network Acceleration and Compression Based on k-Means.

Sensors (Basel, Switzerland)
Convolutional Neural Networks (CNNs) are popular models that are widely used in image classification, target recognition, and other fields. Model compression is a common step in transplanting neural networks into embedded devices, and it is often use...
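
A minimal sketch of weight sharing via k-means in the spirit of this line of work: a layer's weights are clustered into k centroids so that only a small codebook plus per-weight indices need to be stored. The value of k and the plain Lloyd iterations are illustrative assumptions.

```python
import numpy as np

def kmeans_quantize(weights: np.ndarray, k: int = 16, iters: int = 20):
    """Cluster the flattened weights into k centroids (plain Lloyd iterations).

    Returns the codebook and a per-weight index, so the layer can be stored as
    k floats plus log2(k)-bit indices instead of full-precision weights.
    """
    w = weights.ravel()
    rng = np.random.default_rng(0)
    centroids = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        idx = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            if np.any(idx == c):
                centroids[c] = w[idx == c].mean()
    idx = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    return centroids, idx

w = np.random.default_rng(1).normal(size=(64, 64))
codebook, idx = kmeans_quantize(w, k=16)
w_quant = codebook[idx].reshape(w.shape)   # reconstructed layer uses only 16 distinct values
print(np.unique(w_quant).size, float(np.abs(w - w_quant).mean()))
```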

MobilePrune: Neural Network Compression via Sparse Group Lasso on the Mobile System.

Sensors (Basel, Switzerland)
It is hard to directly deploy deep learning models on today's smartphones due to the substantial computational costs introduced by millions of parameters. To compress the model, we develop an ℓ0-based sparse group lasso model called MobilePrune which...
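
For intuition, here is a sketch of the proximal (shrinkage) step induced by a sparse group lasso penalty, which zeroes individual weights and, when a group's norm is small, the entire group (e.g., all weights feeding one neuron). MobilePrune's actual ℓ0-based formulation and optimizer are more involved; this is only the standard prox for an l1 plus group-l2 penalty.

```python
import numpy as np

def prox_sparse_group_lasso(w: np.ndarray, lam_l1: float, lam_group: float) -> np.ndarray:
    """Proximal step for lam_l1*||w||_1 + lam_group*||w||_2, applied to one group of weights."""
    # Element-wise soft-thresholding (the l1 part) ...
    u = np.sign(w) * np.maximum(np.abs(w) - lam_l1, 0.0)
    # ... then group shrinkage: the whole group is zeroed if its norm is small enough.
    norm = np.linalg.norm(u)
    if norm <= lam_group:
        return np.zeros_like(u)
    return (1.0 - lam_group / norm) * u

group = np.array([0.05, -0.02, 0.8, -0.6])              # weights feeding one neuron (toy example)
print(prox_sparse_group_lasso(group, 0.1, 0.3))         # small entries zeroed, the group survives
print(prox_sparse_group_lasso(group * 0.1, 0.01, 0.3))  # the entire group is pruned
```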

LAP: Latency-aware automated pruning with dynamic-based filter selection.

Neural networks: the official journal of the International Neural Network Society
Model pruning is widely used to compress and accelerate convolutional neural networks (CNNs). Conventional pruning techniques only focus on how to remove more parameters while ensuring model accuracy. This work not only covers the optimization of mod...
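
As a hedged sketch of the general latency-aware idea, not LAP's dynamic filter-selection criterion, the loop below greedily removes the filter with the lowest importance per unit of latency saved until a latency budget is met; the importance scores and per-filter latency costs are assumed to come from profiling or a lookup table.

```python
import numpy as np

def latency_aware_prune(importance, latency_cost, latency_budget):
    """Greedily drop the filter with the lowest importance per unit of latency saved.

    importance:   per-filter importance scores (e.g., L1 norms), one array per layer.
    latency_cost: estimated latency contribution of each filter (same shapes).
    Returns a boolean keep-mask per layer once the total latency fits the budget.
    """
    keep = [np.ones(len(s), dtype=bool) for s in importance]
    total = sum(float(c.sum()) for c in latency_cost)
    while total > latency_budget:
        best = None                                      # (score, layer index, filter index)
        for li, (s, c, m) in enumerate(zip(importance, latency_cost, keep)):
            for fi in np.flatnonzero(m):
                score = s[fi] / max(float(c[fi]), 1e-9)  # low value and cheap to sacrifice
                if best is None or score < best[0]:
                    best = (score, li, fi)
        if best is None:                                 # nothing left to prune
            break
        _, li, fi = best
        keep[li][fi] = False
        total -= float(latency_cost[li][fi])
    return keep

imp = [np.array([0.9, 0.1, 0.5]), np.array([0.4, 0.8])]
lat = [np.array([2.0, 2.0, 2.0]), np.array([3.0, 3.0])]
print(latency_aware_prune(imp, lat, latency_budget=7.0))   # drops the cheapest-to-lose filters first
```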

Progressive compressive sensing of large images with multiscale deep learning reconstruction.

Scientific reports
Compressive sensing (CS) is a sub-Nyquist sampling framework that has been employed to improve the performance of numerous imaging applications during the last 15 years. Yet, its application for large and high-resolution imaging remains challenging i...
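
To make the sampling model concrete, here is a minimal compressive-sensing sketch on a 1-D signal: random Gaussian measurements followed by classical ISTA reconstruction with a DCT sparsity prior, standing in for the paper's multiscale deep reconstruction network; sizes and the regularization weight are arbitrary assumptions.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] *= 1 / np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def ista_reconstruct(y, A, D, lam=0.05, iters=300):
    """Recover x from y = A @ x, assuming c = D @ x is sparse (A is the m x n sensing matrix)."""
    B = A @ D.T                                        # work on DCT coefficients c, with x = D.T @ c
    step = 1.0 / np.linalg.norm(B, 2) ** 2
    c = np.zeros(B.shape[1])
    for _ in range(iters):
        c = c - step * (B.T @ (B @ c - y))             # gradient step on the data-fit term
        c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)   # soft-thresholding
    return D.T @ c

n, m = 128, 48
rng = np.random.default_rng(0)
D = dct_matrix(n)
c_true = np.zeros(n); c_true[[0, 3, 10]] = [3.0, -2.0, 1.5]        # sparse in the DCT domain
x_true = D.T @ c_true
A = rng.standard_normal((m, n)) / np.sqrt(m)                       # sub-Nyquist: 48 measurements for 128 samples
x_hat = ista_reconstruct(A @ x_true, A, D)
print(float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))   # relative error; should be small
```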

StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.

IEEE transactions on neural networks and learning systems
Weight pruning methods of deep neural networks (DNNs) have been demonstrated to achieve a good model pruning rate without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pr...
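
A minimal sketch of the ADMM pruning pattern on a toy least-squares "layer", assuming row-structured sparsity (whole rows of W zeroed): the W-step descends on the augmented loss, the Z-step projects onto the structured-sparse set, and the dual variable U ties them together. The real method alternates such steps with SGD on the full network loss; everything else here is an illustrative assumption.

```python
import numpy as np

def project_row_sparse(W: np.ndarray, rows_to_keep: int) -> np.ndarray:
    """Keep only the rows of W with the largest L2 norm (the structured-sparsity constraint)."""
    Z = np.zeros_like(W)
    top = np.argsort(np.linalg.norm(W, axis=1))[::-1][:rows_to_keep]
    Z[top] = W[top]
    return Z

def admm_prune(X, Y, rows_to_keep=2, rho=1.0, iters=100):
    """ADMM sketch: minimize ||X W - Y||^2 s.t. W has at most `rows_to_keep` nonzero rows."""
    W = np.zeros((X.shape[1], Y.shape[1])); Z = W.copy(); U = W.copy()
    lip = 2 * np.linalg.norm(X, 2) ** 2 + rho          # step size for the W-subproblem
    for _ in range(iters):
        for _ in range(20):                            # W-step: descend on loss + (rho/2)||W - Z + U||^2
            grad = 2 * X.T @ (X @ W - Y) + rho * (W - Z + U)
            W -= grad / lip
        Z = project_row_sparse(W + U, rows_to_keep)    # Z-step: projection onto the constraint set
        U += W - Z                                     # dual update
    return project_row_sparse(W, rows_to_keep)         # final hard prune

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
W_true = np.zeros((6, 3)); W_true[[1, 4]] = rng.standard_normal((2, 3))   # only two active rows
W_hat = admm_prune(X, X @ W_true, rows_to_keep=2)
print(np.flatnonzero(np.linalg.norm(W_hat, axis=1) > 1e-6))   # should recover rows [1, 4]
```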

Communication-efficient federated learning via knowledge distillation.

Nature communications
Federated learning is a privacy-preserving machine learning technique to train intelligent models from decentralized data, which enables exploiting private data by communicating local model updates in each iteration of model learning rather than the ...
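
A minimal sketch of the communication pattern behind distillation-based federated learning: instead of exchanging full model weights, each client uploads soft predictions on a small shared reference set, the server averages them, and clients distill from the aggregate. The linear "models", the reference set, and the single round here are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_clients, n_ref, n_feat, n_class = 4, 32, 10, 3
X_ref = rng.standard_normal((n_ref, n_feat))                    # small shared reference set
clients = [rng.standard_normal((n_feat, n_class)) for _ in range(n_clients)]   # local linear models

# One communication round: each client uploads only its soft predictions on the reference set.
soft_labels = [softmax(X_ref @ W) for W in clients]
teacher = np.mean(soft_labels, axis=0)                          # server aggregates predictions, not weights

# Each client distills the aggregated knowledge with a few gradient steps on cross-entropy.
for W in clients:
    for _ in range(200):
        p = softmax(X_ref @ W)
        W -= 0.5 * X_ref.T @ (p - teacher) / n_ref              # gradient of mean cross-entropy vs. teacher

gap = float(np.abs(softmax(X_ref @ clients[0]) - teacher).mean())
print(round(gap, 4))   # the client's predictions move toward the ensemble
```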