AIMC Topic: Data Compression

Showing 51 to 60 of 147 articles

Low-Complexity Adaptive Sampling of Block Compressed Sensing Based on Distortion Minimization.

Sensors (Basel, Switzerland)
Block compressed sensing (BCS) is suitable for image sampling and compression in resource-constrained applications. Adaptive sampling methods can effectively improve the rate-distortion performance of BCS. However, adaptive sampling methods bring hig...
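As background on the sampling step this entry refers to, here is a minimal NumPy sketch of fixed-rate block compressed sensing: the image is split into B×B blocks and every block is measured with the same random Gaussian matrix. The function name and the Gaussian Phi are illustrative assumptions; the article's adaptive, distortion-minimizing rate allocation is not modeled here.

```python
import numpy as np

def block_measure(image, B=8, ratio=0.25, seed=0):
    """Fixed-rate block compressed sensing: split the image into BxB
    blocks and apply one shared random measurement matrix Phi of shape
    (m, B*B), with m = ratio * B*B, to each vectorized block."""
    rng = np.random.default_rng(seed)
    m = max(1, int(round(ratio * B * B)))
    Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)
    H, W = image.shape
    blocks = [image[i:i + B, j:j + B].ravel()
              for i in range(0, H, B) for j in range(0, W, B)]
    # one m-dimensional measurement vector per block
    return np.stack([Phi @ b for b in blocks]), Phi
```

Adaptive BCS variants differ mainly in making `ratio` vary per block (e.g. by estimated block complexity) instead of being constant.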

Towards Convolutional Neural Network Acceleration and Compression Based on K-Means.

Sensors (Basel, Switzerland)
Convolutional Neural Networks (CNNs) are popular models that are widely used in image classification, target recognition, and other fields. Model compression is a common step in transplanting neural networks into embedded devices, and it is often use...
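To make the K-means compression idea concrete, here is a small 1-D k-means weight-clustering sketch in NumPy (a generic illustration of the technique, not the article's exact pipeline): after clustering, a layer stores only k centroid values plus small integer indices.

```python
import numpy as np

def kmeans_quantize(weights, k=4, iters=20, seed=0):
    """Cluster weights into k centroids with 1-D k-means; each weight
    is replaced by its nearest centroid, so the tensor can be stored
    as k floats plus a low-bit index per weight."""
    rng = np.random.default_rng(seed)
    w = weights.ravel()
    centroids = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        # assign each weight to its nearest centroid
        idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        # move each centroid to the mean of its assigned weights
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = w[idx == j].mean()
    return centroids[idx].reshape(weights.shape), idx
```

With k = 16, for example, each weight index fits in 4 bits instead of a 32-bit float.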

MobilePrune: Neural Network Compression via Sparse Group Lasso on the Mobile System.

Sensors (Basel, Switzerland)
It is hard to directly deploy deep learning models on today's smartphones due to the substantial computational costs introduced by millions of parameters. To compress the model, we develop an ℓ0-based sparse group lasso model called MobilePrune which...
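For context on the penalty family this entry builds on, here is the classic sparse group lasso regularizer in NumPy. Note this is the standard ℓ1 + group-ℓ2 relaxation, not MobilePrune's ℓ0-based formulation; the parameter names are illustrative.

```python
import numpy as np

def sparse_group_lasso(groups, alpha=0.5, lam=1e-3):
    """Penalty lam * (alpha * sum|w| + (1 - alpha) * sum_g ||w_g||_2).
    The l1 term drives individual weights to zero; the group l2 term
    drives whole groups (e.g. all weights of one neuron) to zero
    together, which is what enables structured model shrinking."""
    l1 = sum(np.abs(g).sum() for g in groups)
    l2 = sum(np.linalg.norm(g) for g in groups)
    return lam * (alpha * l1 + (1 - alpha) * l2)
```

In training, this value is added to the task loss; `alpha` trades element-wise sparsity against group-wise sparsity.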

LAP: Latency-aware automated pruning with dynamic-based filter selection.

Neural networks : the official journal of the International Neural Network Society
Model pruning is widely used to compress and accelerate convolutional neural networks (CNNs). Conventional pruning techniques only focus on how to remove more parameters while ensuring model accuracy. This work not only covers the optimization of mod...

Progressive compressive sensing of large images with multiscale deep learning reconstruction.

Scientific reports
Compressive sensing (CS) is a sub-Nyquist sampling framework that has been employed to improve the performance of numerous imaging applications during the last 15 years. Yet, its application for large and high-resolution imaging remains challenging i...

StructADMM: Achieving Ultrahigh Efficiency in Structured Pruning for DNNs.

IEEE transactions on neural networks and learning systems
Weight pruning methods of deep neural networks (DNNs) have been demonstrated to achieve a good model pruning rate without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pr...
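To illustrate what "structured" means here, a minimal filter-pruning sketch in NumPy: whole output filters are ranked by ℓ1 norm and the weakest are dropped, shrinking the layer's shape. This is generic magnitude-based structured pruning for illustration, not the ADMM-based optimization the article proposes.

```python
import numpy as np

def prune_filters(conv_w, keep_ratio=0.5):
    """Structured pruning of a conv weight tensor with shape
    (out_channels, in_channels, kh, kw): score each output filter by
    its l1 norm, keep the strongest keep_ratio fraction, and drop the
    rest. Unlike unstructured pruning, the result is a genuinely
    smaller dense tensor."""
    norms = np.abs(conv_w).sum(axis=(1, 2, 3))  # one score per filter
    n_keep = max(1, int(round(keep_ratio * conv_w.shape[0])))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return conv_w[keep], keep
```

The returned `keep` indices are also needed to slice the next layer's input channels consistently.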

Communication-efficient federated learning via knowledge distillation.

Nature communications
Federated learning is a privacy-preserving machine learning technique to train intelligent models from decentralized data, which enables exploiting private data by communicating local model updates in each iteration of model learning rather than the ...
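As a sketch of the distillation mechanism this entry relies on, here is the standard temperature-softened KL distillation loss in NumPy (a generic formulation; the article's federated protocol around it is not modeled):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-T distributions, scaled by
    T^2 (the usual gradient-magnitude correction). In communication-
    efficient federated learning, clients can exchange such soft
    predictions on shared data instead of full model weights."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T
```

The loss is zero when student and teacher logits agree and strictly positive otherwise.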

SensiMix: Sensitivity-Aware 8-bit index & 1-bit value mixed precision quantization for BERT compression.

PloS one
Given a pre-trained BERT, how can we compress it to a fast and lightweight one while maintaining its accuracy? Pre-trained language models, such as BERT, are effective for improving the performance of natural language processing (NLP) tasks. However, ...
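To make the "1-bit value" half of the title concrete, here is a binary-weight-network style value quantizer in NumPy: each weight keeps only its sign plus one shared full-precision scale. This is a simplified illustration of 1-bit value quantization, not SensiMix's sensitivity-aware mixed-precision scheme.

```python
import numpy as np

def binarize_values(w):
    """1-bit value quantization: store sign(w) (one bit per weight)
    plus a single scale alpha = mean(|w|), and reconstruct each
    weight as alpha * sign(w)."""
    alpha = float(np.abs(w).mean())
    return alpha * np.sign(w), alpha
```

A mixed-precision scheme applies this only to quantization-tolerant parts of the model and keeps sensitive parts at higher precision.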

Deep-learning-based projection-domain breast thickness estimation for shape-prior iterative image reconstruction in digital breast tomosynthesis.

Medical physics
BACKGROUND: Digital breast tomosynthesis (DBT) is a technique that can overcome the shortcomings of conventional X-ray mammography and can be effective for the early screening of breast cancer. The compression of the breast is essential during the DB...

Skeleton-Based Spatio-Temporal U-Network for 3D Human Pose Estimation in Video.

Sensors (Basel, Switzerland)
Despite the great progress in 3D pose estimation from videos, there is still a lack of effective means to extract spatio-temporal features of different granularity from complex dynamic skeleton sequences. To tackle this problem, we propose a novel, s...