
Accelerated sparsity based reconstruction of compressively sensed multichannel EEG signals


Authors: Muhammad Tayyib aff001;  Muhammad Amir aff001;  Umer Javed aff001;  M. Waseem Akram aff002;  Mussyab Yousufi aff001;  Ijaz M. Qureshi aff003;  Suheel Abdullah aff001;  Hayat Ullah aff001
Authors place of work: Faculty of Engineering and Technology, International Islamic University Islamabad, Islamabad, Pakistan aff001;  Institute of Fundamental and Frontier Science, University of Electronic Science and Technology of China, Chengdu, China aff002;  Department of Electrical Engineering, Air University, Islamabad, Pakistan aff003
Published in the journal: PLoS ONE 15(1)
Category: Research Article
doi: https://doi.org/10.1371/journal.pone.0225397

Summary

Wearable electronics capable of recording and transmitting biosignals can provide convenient and pervasive health monitoring. A typical EEG recording produces a large amount of data. Conventional compression methods cannot compress the data below the Nyquist rate, so a large amount of data remains even after compression, requiring large storage and long transmission times. Compressed sensing offers a solution to this problem and provides a way to compress data below the Nyquist rate. In this paper, a double temporal sparsity based reconstruction algorithm is applied to the recovery of compressively sampled EEG data. The results are further improved by modifying the double temporal sparsity based reconstruction algorithm with a Schatten-p norm, along with a decorrelation transformation of the EEG data before processing. The proposed modified double temporal sparsity based reconstruction algorithm outperforms block sparse Bayesian learning and rakeness-based compressed sensing algorithms in terms of SNDR and NMSE. Simulation results further show that the proposed algorithm has a better convergence rate and a lower execution time.

Keywords:

Algorithms – Mathematical functions – Electroencephalography – Signal processing – Man-computer interface – Fourier analysis – Data compression – Compressed sensing

Introduction

In a Brain Computer Interface (BCI), a non-muscular connection between computers and humans is established to assist in converting coded brain signals into external commands [25, 32]. EEG-based BCI has gained significant importance in recent years for health-care monitoring, including early detection of seizures, trauma, Alzheimer's disease and stroke [29]. A normal EEG recording contains a large amount of data that cannot be sampled and transmitted in many real-life scenarios. Epileptic seizure detection requires continuous EEG monitoring of patients that may last for several hours [22]. Saving, processing and transmitting this huge amount of data requires bulk storage and immense processing power [1]. As an example, multichannel EEG recordings range from 24 to several hundred electrodes; with 24 electrodes sampled at 200 Hz using 12 bits of resolution, at least 1 GB of data is produced per day [8]. Compressive sensing (CS) theory suggests a solution to this problem by taking far fewer samples than the Nyquist rate requires while still allowing faithful recovery [38].

The traditional way of compression discards a huge amount of data, resulting in lossy compression. To overcome this issue, signal compression techniques with better sampling patterns have been developed, which enable the same amount of information to be stored in a more compact form [24]. Examples include matched filters, autocorrelation based Euclidean distance, Bayesian inference methods, wavelet compression and JPEG 2000 [4]. All these methods require sampling at the Nyquist rate using an analog-to-digital converter (ADC), resulting in computationally complex data, high processing time, and expensive hardware. CS provides a promising solution to all of these problems [4]. CS reconstructs the signal from highly under-sampled data, acquired even below the Nyquist rate, discarding redundant information; this results in a huge reduction in dimension due to the smaller number of measurements. The basic theory of CS relies on two necessary conditions: sparsity and incoherence [14].

The main idea behind CS is that a signal can be represented by only a few non-zero coefficients; this is achieved using a sparse sensing matrix [11]. Two assumptions are made for CS. First, the data is either sparse itself or sparse in some transform domain. Second, the measurement basis and the representation basis are mutually incoherent [9]; this allows compression below the Nyquist rate. Since the number of measurements is far smaller than the length of the original signal, recovering the original signal is an NP-hard problem [23]. Because EEG is not sparse in the time domain, the EEG signal is made sparse using different basis or dictionary functions. Many publications consider dictionaries such as the Slepian basis and the Gabor framework [38]. Mahrous and Ward [19] presented a CS framework for EEG using a Dirac sensing matrix and efficiently reconstructed the EEG signal after compression. Majumdar et al. [10] illustrated CS recovery of EEG using the 2-D Fourier transform; however, [38] claimed that better reconstruction can be achieved in the wavelet domain instead of the Gabor domain. Zhu et al. [3] claimed that Daubechies wavelets achieve better reconstruction accuracy than other basis functions.
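As a hedged, self-contained illustration of the wavelet-domain sparsity these works exploit (a sketch only, not code from any of the cited papers; the synthetic signal, the 200 Hz sampling rate and the db4 wavelet choice are assumptions), the following Python snippet decomposes a signal with a Daubechies wavelet and counts how few coefficients carry most of its energy:

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic EEG-like epoch (384 samples, matching the epoch length used later in the paper)
rng = np.random.default_rng(0)
t = np.arange(384) / 200.0                                   # assume 200 Hz sampling
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 21 * t) + 0.1 * rng.standard_normal(384)

# Daubechies wavelet decomposition (db4 is an illustrative choice)
coeffs = pywt.wavedec(x, 'db4', level=4)
c = np.concatenate(coeffs)

# Fraction of coefficients needed to retain 99% of the signal energy
sorted_sq = np.sort(c ** 2)[::-1]
k = np.searchsorted(np.cumsum(sorted_sq), 0.99 * np.sum(sorted_sq)) + 1
print(f"{k} of {c.size} wavelet coefficients capture 99% of the energy")
```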

Compressed sensing

Compressed sensing relies on the hypothesis that the signal x∈RN×1 is compressed by a sampling (measurement) matrix Φ∈RM×N. The sampling model is formulated as,

y = Φx
where y∈RM×1 represents the compressed measurements, with M≪N indicating that the number of sampled measurements is far smaller than the length of the original signal. If x is sparse, the recovery problem requires only the compressed measurements and the sampling matrix; if not, x should have a sparse representation in a representation matrix (dictionary) Ψ∈RN×P with N≪P. This can be written as,

x = Ψθ
where θ∈RP×1 is sparse. x can be recovered using the measurements y, the dictionary Ψ, and the sampling matrix Φ. The minimization problem formed is,

min_θ ‖θ‖0   subject to   y = ΦΨθ
where ‖.‖0 is the ℓ0 norm, i.e. it counts the number of non-zero entries. The vector x is called K-sparse when the number of its non-zero entries equals K. Recovering x from the measurements y is an under-determined inverse problem, and finding its solution is NP-hard [37], since the sensing matrix is ill-conditioned due to the large undersampling. The general problem is therefore regularized in order to achieve recovery. Sparsity regularization is a comparatively elementary solution to this problem.

In the sparse recovery problem, the expected signal is assumed to be sparse when represented with the transform Ψ; to regularize it in the transformed domain, the ℓ1 norm is used as a surrogate for the ℓ0 norm. Hence, for the EEG reconstruction problem, the sparsity regularization is formulated as [21],
min_x ‖y − Φx‖F² + λ1‖Ψx‖1
where Ψ represents the sparsifying basis and ‖.‖1 is the ℓ1 norm. The fidelity term in Eq 4 minimizes the variance of the noise, while the regularization term λ1 is added to induce sparsity of x over the basis Ψ. ‖.‖F² denotes the squared Frobenius norm, which can be defined as,

‖A‖F = √( Σi Σj |aij|² )
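The following Python sketch illustrates the sampling model and the quantities defined above on synthetic data; the K-sparse coefficient vector, the square orthonormal dictionary Ψ (used for simplicity instead of the overcomplete Ψ∈RN×P) and the Gaussian Φ are assumptions made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 384, 192, 20                      # epoch length, measurements, sparsity (illustrative values)

# K-sparse coefficient vector theta and a random orthonormal dictionary Psi
theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))   # orthonormal representation basis (assumption)
x = Psi @ theta                                      # signal, sparse in Psi

# Random Gaussian sampling matrix Phi and compressed measurements y = Phi x
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

print("l0 norm of theta      :", np.count_nonzero(theta))        # sparsity level K
print("Frobenius norm of Phi :", np.linalg.norm(Phi, 'fro'))     # ||Phi||_F
print("compression ratio M/N :", M / N)
```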
The proposed method is based on a double sparsity framework. The motivation behind the proposed method is to improve the existing reconstruction accuracy of multi-channel EEG signals through the following contributions. First, the multi-channel EEG signal is pre-processed using a zero-mean and whitening transform. Second, since the total variation matrix exploited in previous work exhibits redundancy in reconstruction accuracy, the proposed algorithm uses a circulant matrix as the sparse sensing matrix instead. Third, the Schatten-p norm is used as a non-convex surrogate function to exploit the double sparsity in CS recovery of multi-channel EEG signals. The flow diagram of the proposed method is shown in Fig 1.

Fig. 1. Flow diagram of proposed method.

The rest of the paper is organized as follows. The Related work section summarizes the existing methods used for the reconstruction of compressively sampled EEG signals. The Proposed method and Results and discussion sections describe and analyze the proposed method on the basis of quantitative measures, and the Conclusion gives the concluding remarks.

Related work

Rakeness-based compressive sensing

Power consumption during wireless transmission has been the focus of many researchers. Bertoni et al. [16] addressed the power consumption issue by introducing rakeness-based compressed sensing, with which good reconstruction of compressed EEG signals can be achieved. The rakeness approach is based on the assumption that certain signals exhibit a non-flat energy distribution. Under this assumption there is no need to construct Φ from randomly selected i.i.d. entries; instead, Φ is tuned statistically to match the input signal. This property increases the average energy of y, which ultimately increases the reconstruction accuracy. To formulate this property, the rakeness ρ between the two stochastic processes x and Φ is defined as,

ρ(Φj, x) = E(Φ,x)[ 〈Φj, x〉² ]
where E(Φ,x) denotes the statistical expectation over x and Φ, 〈.,.〉 is the standard inner product, Φj is a sensing sequence, and x denotes the signal instances. Using the rakeness-based approach, both noise suppression and good signal reconstruction are achieved.
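A minimal Monte-Carlo sketch of this idea is given below; the low-pass signal model and the way the "tuned" rows are generated are illustrative assumptions, not the design procedure of [16]:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_signals, n_rows = 384, 2000, 64

def unit_rows(A):
    """Normalize each row to unit l2 norm so the comparison is energy-fair."""
    return A / np.linalg.norm(A, axis=1, keepdims=True)

def rakeness(phi_rows, signals):
    """Monte-Carlo estimate of E[<phi_j, x>^2] over sensing rows and signal instances."""
    return np.mean((phi_rows @ signals.T) ** 2)

# Signal instances with a non-flat (low-pass) energy distribution (an assumption)
shape = np.exp(-np.arange(N // 2 + 1) / 20.0)
def shaped_noise(rows):
    white = rng.standard_normal((rows, N))
    return np.fft.irfft(np.fft.rfft(white, axis=1) * shape, n=N, axis=1)

signals = shaped_noise(n_signals)
phi_iid = unit_rows(rng.standard_normal((n_rows, N)))   # classical i.i.d. sensing rows
phi_tuned = unit_rows(shaped_noise(n_rows))             # rows statistically matched to the signals

print("rakeness, i.i.d. rows :", rakeness(phi_iid, signals))
print("rakeness, tuned rows  :", rakeness(phi_tuned, signals))
```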

Block sparse Bayesian learning

Using a field programmable gate array (FPGA), Liu et al. [27] showed that, compared to wavelet-domain methods, block sparse Bayesian learning (BSBL) gives prominent results in terms of power consumption and reconstruction accuracy. Using the fast marginalized (FM) likelihood method, a fast implementation of BSBL was developed in which the EEG signal is structured into blocks, as shown in Eq (7),

x = [ x1ᵀ, x2ᵀ, …, xgᵀ ]ᵀ,   xi ∈ Rdi×1
which represents that x has g blocks, only a few of which are non-zero, where di is the size of the i-th block. BSBL uses the intra-block correlation structure to model the signal x with a Gaussian distribution. The resulting reconstruction is not robust in all cases, as blocking introduces noise; therefore, regularization is applied in order to achieve better results.

Simultaneous co-sparsity and low rank

In a very recent work [2], it was observed that, due to correlation among EEG channels, the same sparsity pattern is adopted after transformation, i.e. the transform coefficients have significant values at the same positions, which leads to row-sparse recovery [2]. This was formulated as an ℓ2,1 norm minimization problem,
min_X ‖ΨX‖2,1   subject to   Y = ΦX
where the ℓ2,1 norm is the sum of the ℓ2 norms of the rows. In Eq 8, the ℓ2 norm gives a dense solution within the selected rows, while the sum over the rows promotes the selection of very few rows.
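A short sketch of the ℓ2,1 norm and the row sparsity it measures (synthetic coefficients and dimensions are assumptions):

```python
import numpy as np

def l21_norm(X):
    """l2,1 norm: sum of the l2 norms of the rows of X."""
    return np.linalg.norm(X, axis=1).sum()

rng = np.random.default_rng(3)
X = np.zeros((384, 32))                                                    # coefficients x channels
X[rng.choice(384, 10, replace=False), :] = rng.standard_normal((10, 32))  # row-sparse pattern
print("l2,1 norm      :", l21_norm(X))
print("non-zero rows  :", np.count_nonzero(np.linalg.norm(X, axis=1)))
```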

Blind compressed sensing

The theory of CS relies on the assumption that the basis in which the signal is sparse is known. Blind Compressed Sensing (BCS) [17] avoids this assumption by combining CS with dictionary learning: BCS estimates both the sparse signal and the sparsifying dictionary from the data, under the assumption that the data are sparse in a learned dictionary, i.e. X = DJ, where J contains the sparse coefficients and D is the unknown dictionary to be estimated [17].


In BCS [17, 35], the signal estimation and the dictionary estimation proceed simultaneously.

Double temporal sparsity based reconstruction

Using double temporal sparsity based reconstruction (DTSR), better reconstruction can be achieved together with an acceleration in time. Aggarwal and Gupta [28] proposed DTSR for sparse signal recovery of fMRI data with prominent results. In this work we have used a modified form of DTSR, along with some pre-processing, for sparse recovery of EEG signals.

The DTSR algorithm is a total variation based algorithm that imposes two ℓ1 norm constraints. The first constraint is applied to the transformed domain of the temporal data, and the second is imposed on the consecutive differences of the same data. The cost function of Eq 4 can then be written as,
min_x ‖y − Φx‖F² + λ1‖Fx‖1 + λ2 Σi ‖F(xi − xi−1)‖1
where λ1 and λ2 are positive regularization parameters, F is the 2-D Fourier transform, and the third term contains the consecutive differences of the columns of the data. The matrix formulation of Eq 10 can be written as
min_x ‖y − Φx‖F² + λ1‖Fx‖1 + λ2‖F(xD)‖1
where D applies consecutive differences to the successive columns of x, known as total variation temporal sparsity, and has the bidiagonal form

D =
[  1   0  ⋯   0 ]
[ −1   1  ⋯   0 ]
[  0  −1  ⋯   0 ]
[  ⋮   ⋮  ⋱   1 ]
[  0   0  ⋯  −1 ]

so that the i-th column of xD equals xi − xi+1.
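A hedged sketch of the two total variation temporal sparsity terms on synthetic multichannel data follows; the difference matrix D matches the structure above, while the regularization weights and the use of the 2-D FFT are illustrative assumptions:

```python
import numpy as np

def difference_matrix(n_cols):
    """Bidiagonal matrix D such that (X @ D)[:, i] = X[:, i] - X[:, i+1]."""
    D = np.zeros((n_cols, n_cols - 1))
    D[np.arange(n_cols - 1), np.arange(n_cols - 1)] = 1.0
    D[np.arange(1, n_cols), np.arange(n_cols - 1)] = -1.0
    return D

rng = np.random.default_rng(4)
X = rng.standard_normal((384, 80))           # epoch length x number of epochs (columns), synthetic
D = difference_matrix(X.shape[1])

term1 = np.abs(np.fft.fft2(X)).sum()         # l1 term on the 2-D Fourier transform of the data
term2 = np.abs(np.fft.fft2(X @ D)).sum()     # l1 term on consecutive column differences

lam1, lam2 = 0.1, 0.1                        # illustrative regularization weights
print("DTSR-style regularization value:", lam1 * term1 + lam2 * term2)
```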
Proposed method

In this section, the alternating direction method of multipliers (ADMM) is adapted to the EEG signal recovery problem using DTSR. In the pre-processing of the signal x, the signal is first made zero mean and is then whitened by making the columns of the signal uncorrelated with each other. The zero-mean, unit-variance normalization can be expressed in mathematical terms as,

x̃ = (x − μ) / σ
where μ is the mean and σ is the standard deviation of the EEG signal x; each value of x is reduced by the mean of its column and divided by σ, resulting in a zero-mean EEG signal. The zero-mean EEG data are then whitened using an optimal whitening method [30]. The reason for this step is that nearby channels of a multichannel EEG signal exhibit strong correlation; removing this correlation by making the columns of the EEG signal orthogonal to each other results in less search time during optimal sparse signal recovery. This can be seen in Fig 2, where the zero-mean, whitened data become stable in fewer iterations than the original algorithm. The whitening of the zero-mean signal can be written mathematically as
z = Wm
where d is the dimension of the zero-mean vector m, and W is the d×d whitening matrix. Whitening in general terms can be viewed as [30],

where V is the variance matrix, V = diag(σ1², …, σd²), such that var(mi) = σi². The whitening transform should satisfy WΣWᵀ = I, and thus WΣWᵀW = W, which holds only if W satisfies the constraint

WᵀW = Σ⁻¹
Fig. 2. MSE vs iterations. A: Original vs proposed method. B: Comparison with related methods.

Using the Mahalanobis whitening method [30], the whitening transform employed in this work is

W = Σ^(−1/2)
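A sketch of this pre-processing step (zero mean followed by Mahalanobis/ZCA whitening, W = Σ^(−1/2)) on synthetic correlated channels; the data, the eigendecomposition route and the small regularization eps are assumptions:

```python
import numpy as np

def whiten(X, eps=1e-8):
    """Zero-mean each column, then apply Mahalanobis (ZCA) whitening W = Sigma^(-1/2)."""
    Xc = X - X.mean(axis=0, keepdims=True)            # zero mean per column (channel)
    Sigma = np.cov(Xc, rowvar=False)                  # channel covariance
    evals, evecs = np.linalg.eigh(Sigma)
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T   # symmetric Sigma^(-1/2)
    return Xc @ W, W                                  # W symmetric, so right-multiplication whitens rows

rng = np.random.default_rng(5)
mix = rng.standard_normal((32, 32))
X = rng.standard_normal((30720, 32)) @ mix            # correlated 32-channel data (synthetic)

Xw, W = whiten(X)
print("max deviation of whitened covariance from identity:",
      np.abs(np.cov(Xw, rowvar=False) - np.eye(32)).max())
```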
The difference between using only zero-mean data and using whitened data can be seen from the number of iterations in Fig 2. Instead of the total variation matrix D used in Eq 11, in this paper we explore the idea of using a circulant matrix [13]. For CS, binary measurement matrices can be formed from the parity check matrices of array codes. The parity check matrix H(r,q) is an r×q array of q×q circulant permutation matrices whose (i,j)-th block is P^((i−1)(j−1)), with q an odd prime and r a positive integer such that 1≤r≤q [36]. Each row of a circulant permutation matrix is a cyclic shift of the previous row by one position. This can be written as,

P =
[ 0  1  0  ⋯  0 ]
[ 0  0  1  ⋯  0 ]
[ ⋮  ⋮  ⋮  ⋱  ⋮ ]
[ 0  0  0  ⋯  1 ]
[ 1  0  0  ⋯  0 ]

H(r, q) =
[ I  I         I           ⋯  I               ]
[ I  P         P²          ⋯  P^(q−1)         ]
[ ⋮  ⋮         ⋮           ⋱  ⋮               ]
[ I  P^(r−1)   P^(2(r−1))  ⋯  P^((r−1)(q−1))  ]
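A possible way to build such a circulant-based binary sensing matrix in Python is sketched below; the seed density and the random row selection are assumptions, not the exact construction of [13, 36]:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(6)
N, M = 384, 192

# Sparse binary seed row (density is an illustrative assumption)
seed = np.zeros(N)
seed[rng.choice(N, 16, replace=False)] = 1.0

C = circulant(seed)                         # N x N circulant matrix: each row is a cyclic shift of the previous one
rows = rng.choice(N, M, replace=False)
Phi = C[rows, :]                            # M x N circulant-based sensing matrix

print("Phi shape:", Phi.shape, " non-zeros per row:", int(Phi[0].sum()))
```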
After making these changes in the cost function of DTSR, the modified version of the algorithm can be rewritten as,
min_z ‖y − Φz‖F² + λ1‖Fz‖1 + λ2‖F(zH)‖1
Instead of the Frobenius norm, which is basically the square root of the sum of the absolute squares, as used in Eq 11, we use the Schatten-p norm. The Schatten-p norm has been successfully used for sparse synthesis models and gives accurate results [3, 7, 18, 33]. Eq 17 can then be re-written as
min_z ‖y − Φz‖Sp^p + λ1‖Fz‖1 + λ2‖F(zH)‖1
where ‖.‖Sp^p is the Schatten-p norm raised to the power p, i.e. the sum of the p-th powers of the singular values σ of the data z. For a matrix T∈Hm and p∈[1,+∞], ‖.‖Sp is defined as,

‖T‖Sp = ( Σi σi(T)^p )^(1/p)

where Hm denotes the underlying matrix space, and σi(T) are the singular values of T arranged in non-increasing order, σ1(T) ≥ σ2(T) ≥ ….
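A small sketch of computing the Schatten-p norm from the singular values (the test matrix is an arbitrary assumption):

```python
import numpy as np

def schatten_p_norm(T, p):
    """Schatten-p norm: l_p norm of the singular values of T."""
    s = np.linalg.svd(T, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(7)
T = rng.standard_normal((192, 80)) @ rng.standard_normal((80, 384)) / 80   # rank-limited test matrix

print("Schatten-1 (nuclear) norm :", schatten_p_norm(T, 1))
print("Schatten-2 (Frobenius)    :", schatten_p_norm(T, 2), "vs", np.linalg.norm(T, 'fro'))
```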

To solve Eq 18, an optimized solution is obtained using ADMM [6].

Optimization algorithm

ADMM has been widely used to solve constrained optimization problems in recent literature [20, 26, 34]. ADMM eases the solution by breaking the original cost function down into several objective functions that are comparatively easy to solve.

Following [28], two auxiliary matrices P∈RM×N and Q∈RM×N are introduced in Eq 18 as

P = Ψz,   Q = zH
By adding these new constraints for each of the auxiliary matrices, the objective function formed is


where B1 and B2 are Lagrange multipliers that enforce equality between the auxiliary and the original matrices, and η1, η2 are penalty parameters.

ADMM updates the variables P, Q and z alternately in the above Lagrangian function. One variable is minimized in each iteration while the other two are kept fixed. Thus, the above function can be decomposed into three sub-problems, with new objective functions,




where j is the iteration index. Sub-problems A1 and A2 minimize the objective function over P and Q, respectively, with z fixed; similarly, A3 minimizes over z while keeping P and Q fixed. Sub-problems A1, A2 and A3 are solved iteratively by updating the Lagrange multipliers B1 and B2.

A1 and A2 subproblems

Sub-problems A1 and A2 are ℓ1 minimization problems. A general ℓ1 minimization problem of this form is
min_U α‖WU‖1 + (β/2)‖U − V‖F²
the solution is [28],

U = Wᴴ soft(WV, α/β),   where soft(t, τ) = sign(t)·max(|t| − τ, 0) is applied element-wise
where W is a unitary matrix, U, V∈RM×N, and α, β > 0. V in Eq 25 is the current approximation of U. Hence, for iteration j, the solution of sub-problem A1 is

which gives P at iteration j of sub-problem A1. Similarly, Q is computed for sub-problem A2 using z^(j−1) and B2^(j−1) as


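The closed-form ℓ1 solution above reduces to element-wise soft thresholding; a hedged sketch (with an orthonormal stand-in for the sparsifying transform and arbitrary α, β values) is:

```python
import numpy as np

def soft_threshold(T, tau):
    """Element-wise soft thresholding: sign(t) * max(|t| - tau, 0)."""
    return np.sign(T) * np.maximum(np.abs(T) - tau, 0.0)

def solve_l1_subproblem(V, W, alpha, beta):
    """Closed-form minimizer of alpha*||W U||_1 + (beta/2)*||U - V||_F^2 for orthonormal W."""
    return W.T @ soft_threshold(W @ V, alpha / beta)

rng = np.random.default_rng(8)
N = 64
W, _ = np.linalg.qr(rng.standard_normal((N, N)))   # orthonormal stand-in for the sparsifying transform
V = rng.standard_normal((N, 16))
U = solve_l1_subproblem(V, W, alpha=0.5, beta=1.0)
print("fraction of zero coefficients in W @ U:", np.mean(np.isclose(W @ U, 0.0)))
```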
A3 subproblem

Sub-problem A3 is quadratic, and it is solved with a conjugate gradient algorithm [12, 15]; in this paper, a line search conjugate gradient algorithm is used, following [28]. The algorithm is iterative: a descent direction is selected to minimize the function, and the step size is then determined by a line search. For general quadratic functions, the line search conjugate gradient algorithm achieves finite convergence.
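As an illustration of this step (not the authors' exact sub-problem, whose system matrix depends on the augmented Lagrangian terms), a quadratic system of the form (ΦᵀΦ + ηI)z = b can be solved matrix-free with SciPy's conjugate gradient routine:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(9)
M, N = 192, 384
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = rng.standard_normal(M)
eta = 1.0

# Matrix-free SPD operator (Phi^T Phi + eta*I), the kind of system a quadratic sub-problem produces
A = LinearOperator((N, N), matvec=lambda v: Phi.T @ (Phi @ v) + eta * v)
b = Phi.T @ y                                  # illustrative right-hand side

z, info = cg(A, b, maxiter=200)
print("CG exit code:", info, " residual norm:", np.linalg.norm(Phi.T @ (Phi @ z) + eta * z - b))
```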

Updating the Lagrange multipliers

In the last step, the Lagrange multipliers are updated iteratively; they help drive convergence in the subsequent iterations. The pseudo-code of the proposed method is shown in Algorithm 1. Convergence is declared when the change in the objective function falls below a threshold or when the maximum number of iterations is reached.

Algorithm 1 Proposed Algorithm

1) INPUT: λ1, λ2, B1^0, B2^0, Z^0, j = 1

2) while convergence is not met do

3) Solve P for sub-problem A1 using Eq 27

4) Solve Q for sub-problem A2 using Eq 28

5) Solve Z for sub-problem A3 using Eq 24

6) Update the Lagrange multipliers:

B1^j = B1^(j−1) + ΨZ^j − P^j

B2^j = B2^(j−1) + Z^j H − Q^j

7) j = j + 1

8) end while

9) OUTPUT: Reconstructed signal Ẑ
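The sketch below mirrors the alternation of Algorithm 1 on a simplified, single-penalty problem, min_z ½‖y − Φz‖² + λ‖Ψz‖1 with an orthonormal Ψ; it is a structural illustration under these assumptions, not the authors' double-sparsity implementation, and all parameter values are arbitrary:

```python
import numpy as np

def soft(T, tau):
    """Element-wise soft thresholding."""
    return np.sign(T) * np.maximum(np.abs(T) - tau, 0.0)

def admm_l1(y, Phi, Psi, lam=0.01, eta=1.0, max_iter=300, tol=1e-9):
    """ADMM for min_z 0.5*||y - Phi z||^2 + lam*||Psi z||_1 with orthonormal Psi.
    Single-penalty simplification mirroring the P / Z / multiplier alternation of Algorithm 1."""
    M, N = Phi.shape
    z = np.zeros(N)
    P = np.zeros(N)                          # auxiliary variable (P ~ Psi z)
    B = np.zeros(N)                          # scaled Lagrange multiplier
    G = Phi.T @ Phi + eta * np.eye(N)        # system matrix of the quadratic z-update
    prev = np.inf
    for _ in range(max_iter):
        P = soft(Psi @ z + B, lam / eta)                               # l1 step (cf. A1/A2)
        z = np.linalg.solve(G, Phi.T @ y + eta * Psi.T @ (P - B))      # quadratic step (cf. A3)
        B = B + Psi @ z - P                                            # multiplier update
        obj = 0.5 * np.sum((y - Phi @ z) ** 2) + lam * np.abs(Psi @ z).sum()
        if abs(prev - obj) < tol * max(1.0, abs(obj)):
            break
        prev = obj
    return z

# Small synthetic test: a signal sparse in an orthonormal basis Psi, compressed by a random Phi
rng = np.random.default_rng(12)
N, M, K = 384, 192, 20
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))
theta = np.zeros(N); theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x = Psi.T @ theta                             # so that Psi @ x = theta is sparse
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

x_hat = admm_l1(y, Phi, Psi)
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```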

Results and discussion

EEG dataset

The publicly available EEG dataset [5] is used for the sparse signal recovery of multi-channel EEG. This commonly used dataset contains 32-channel EEG signals of length 30720 data points, where each channel consists of 80 epochs of 384 points. For the compression of each epoch, a sparse circulant sensing matrix Φ∈R192×384 is used, and Ψ∈R384×384 is the Fourier-domain sparsifying matrix formed by computing the Fourier transform of z along each row. To recover the EEG signal, the ADMM algorithm is used [6].
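A hedged sketch of this experimental setup (random data standing in for the dataset of [5], an assumed seed density for the circulant Φ, and a unitary DFT matrix as Ψ):

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(10)

# Stand-in for the 32-channel recording of 30720 samples per channel (the real data is not bundled here)
eeg = rng.standard_normal((32, 30720))
epochs = eeg.reshape(32, 80, 384)                 # 80 epochs of 384 samples per channel

# Sparse circulant sensing matrix Phi (192 x 384) built from a binary seed row (density assumed)
seed = np.zeros(384); seed[rng.choice(384, 16, replace=False)] = 1.0
Phi = circulant(seed)[rng.choice(384, 192, replace=False), :]

# Fourier-domain sparsifying matrix Psi (384 x 384)
Psi = np.fft.fft(np.eye(384)) / np.sqrt(384)

# Compress every epoch of every channel: y = Phi x
Y = np.einsum('mn,cen->cem', Phi, epochs)         # shape (32, 80, 192)
print("compressed measurements:", Y.shape)
```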

Quantitative analysis

This section presents the results of the proposed modified DTSR method in comparison with some of the existing CS-based EEG signal reconstruction techniques. Reconstruction quality is measured with the normalized mean square error (NMSE), the mean square error (MSE) and the signal-to-noise-and-distortion ratio (SNDR).

For a reference EEG signal x and its reconstructed version x̂, the NMSE is computed as

NMSE = ‖x − x̂‖2² / ‖x‖2²

where ‖.‖2 represents the ℓ2 norm. Similarly, the SNDR is calculated as

SNDR = 20 log10( ‖x‖2 / ‖x − x̂‖2 )
MSE can be calculated as

MSE = (1 / (L·C·N)) Σ(l=1..L) Σ(c=1..C) Σ(n=1..N) ( xc,n,l − x̂c,n,l )²

where x and x̂ are the reference and reconstructed EEG signals, C is the number of EEG channels, N is the epoch length, and L is the number of experiments.
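These three metrics can be computed as in the following sketch (the synthetic reference and "reconstruction" are placeholders used only to exercise the functions):

```python
import numpy as np

def nmse(x, x_hat):
    """Normalized mean square error ||x - x_hat||_2^2 / ||x||_2^2."""
    return np.sum((x - x_hat) ** 2) / np.sum(x ** 2)

def sndr_db(x, x_hat):
    """Signal-to-noise-and-distortion ratio in dB."""
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_hat))

def mse(x, x_hat):
    """Mean square error averaged over all channels, samples and experiments."""
    return np.mean((x - x_hat) ** 2)

rng = np.random.default_rng(11)
x = rng.standard_normal((32, 384, 80))             # channels x epoch length x epochs
x_hat = x + 0.05 * rng.standard_normal(x.shape)    # stand-in "reconstruction" with small error
print(f"NMSE = {nmse(x, x_hat):.4f}, SNDR = {sndr_db(x, x_hat):.1f} dB, MSE = {mse(x, x_hat):.5f}")
```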

The reconstructed NMSE (average), MSE (average), and SNDR (average) over all 32 EEG channels are presented in Table 1. Compression rates of 25% (4:1) and 50% (2:1) are used for the evaluation. The MSE of the multi-channel EEG signal is shown in Fig 2; the proposed method gives the best results in fewer iterations than the other existing algorithms. The multi-channel EEG signal, along with its reconstructed versions using different algorithms, is shown in Fig 3. The EEG data used in this analysis consist of 32 channels, 384 time samples and 80 epochs, i.e. 32×384×80. The results in Fig 3 and Table 1 indicate that the proposed algorithm outperforms the other state-of-the-art algorithms in terms of accuracy and execution time.

Tab. 1. Quantitative measures for sparse signal recovery of EEG signals.
Fig. 3. Reconstruction. Overlaid original and reconstructed EEG signals for a duration of 0-50 s. A: BSBL. B: Rakeness. C: Proposed. D: Combined.

Conclusion

In this work, compressively sampled EEG data are recovered using the DTSR algorithm. The conventional DTSR algorithm, originally designed for fMRI data, is tailored to EEG sparse recovery through three main contributions. As a first step, pre-processing is performed by making the EEG data zero mean and unit variance and then whitening it. The second step is to formulate a circulant matrix instead of the total variation matrix, which limits the search space for fast convergence of the algorithm. Finally, it is shown that using the Schatten-p norm instead of the Frobenius norm yields better reconstruction accuracy. The proposed modified DTSR algorithm outperforms conventional DTSR as well as other state-of-the-art CS recovery techniques in terms of NMSE and SNDR.


References

1. Villena. A, Tardon. Lorenzo. J, Barbancho. I, Barbancho. A. M, B Elvira., and H Niels. T. “Preprocessing for Lessening the Influence of Eye Artifacts in EEG Analysis,” Applied Sciences, vol. 9, no. 9, pp. 1757, 2019. doi: 10.3390/app9091757

2. Zou. X, Feng. L and Sun. H, “Robust compressive sensing of multichannel EEG signals in the presence of impulsive noise,” Information Sciences, vol. 429, pp. 120–129, 2018. doi: 10.1016/j.ins.2017.11.002

3. Zhu. J and Chen. C, Su. S and Chang. Z, “Compressive Sensing of Multichannel EEG Signals via lq Norm and Schatten- p Norm Regularization,” Mathematical Problems in Engineering, pp. 208–216, 2016.

4. Liu. Y, De-Vos. M and Van-Huffel. S, “Compressed sensing of multichannel EEG signals: The simultaneous cosparsity and low-rank optimization,” IEEE Transactions on Biomedical Engineering, vol. 62, pp. 2055–2061, 2015. doi: 10.1109/TBME.2015.2411672

5. Delorme. A and Makeig. S, “EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, pp. 9–21, 2004. doi: 10.1016/j.jneumeth.2003.10.009

6. Zheng. Q, Zhu. F and Heng. P. A, “Robust support matrix machine for single trial EEG classification,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 3, pp. 551–562, 2018. doi: 10.1109/TNSRE.2018.2794534

7. Nie. F, Huang. H and Ding. C, “Low-Rank Matrix Recovery via Efficient Schatten p-Norm Minimization,” AAAI Conference on Artificial Intelligence, pp. 655–661, 2012.

8. Ma. T, Li. H, Yang. H, Lv. X, Li. P, Liu. T and Yao. D, “The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing,” Journal of Neuroscience Methods, vol. 275, pp. 80–92, 2017. doi: 10.1016/j.jneumeth.2016.11.002

9. Zhao. W, Sun. B, Wu. T and Yang. Z, “Compressed sensing,EEG,VLSI,data compression,sparse sensing matrix,spike sorting,wireless neural interface,” IEEE Transactions on Biomedical Circuits and Systems, vol. 12, no. 1, pp. 242–254, 2018.

10. Majumdar. A, Shukla. A and Ward. R, “Combining Sparsity with Rank-Deficiency for Energy Efficient EEG Sensing and Transmission over Wireless Body Area Network,” ICASSP 2015, 2015.

11. Fan Y. R, Huang. T. Z, Liu. J and Zhao. X. L, “Compressive sensing via nonlocal smoothed rank function,” PLoS ONE, vol. 11, no. 9, pp. 1–15, 2016. doi: 10.1371/journal.pone.0162041

12. Eftekhari. A, Vandereycken. B, Vilmart. G and Zygalakis. K, “Explicit Stabilised Gradient Descent for Faster Strongly Convex Optimisation,” arXiv preprint arXiv:1805.07199, 2018.

13. Liu. X. J and Xia. S. T, “Constructions of quasi-cyclic measurement matrices based on array codes,” IEEE International Symposium on Information Theory—Proceedings, vol. 34, no. 1, pp. 479–483, 2013.

14. Capurro. I, Lecumberry. F, Martin. A, Ramirez. I, Rovira. E and Seroussi. G, “Efficient Sequential Compression of Multichannel Biomedical Signals,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 4, pp. 904–616, 2017. doi: 10.1109/JBHI.2016.2582683

15. Yuan. G and Hu. W, “A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations,” Journal of Inequalities and Applications, vol. 113, no. 4, pp. 1–19, Feb. 2018.

16. Bertoni. N, Senecirathna. B, Pareschi. F, Mangia. M, Rovatti. R, Abshire. P, Simon. J and Setti. G, “Low-power EEG monitor based on Compressed Sensing with Compressed Domain Noise Rejection,” IEEE International Symposium Circuits Systems, pp. 522-525, 2016.

17. Cisotto. G, Guglielmi. A. V, Badia. L and Zanella. A, “Joint compression of EEG and EMG signals for wireless biometrics,” 2018 IEEE Global Communications Conference (GLOBECOM), pp. 1-6, 2018.

18. Xie. Y, Gu. S, Liu. Y, Zuo. W, Zhang. W and Zhang. L, “Weighted Schatten p-Norm Minimization for Image Denoising and Background Subtraction,” IEEE Transactions on Image Processing, vol. 25, no. 10, pp. 4842–4857, 2016.

19. Mahrous. H and Ward. R, “A Low Power Dirac Basis Compressed Sensing Framework for EEG using a Meyer Wavelet Function Dictionary” IEEE Canadian Conference on Electrical and Computer Engineering, 2016.

20. Peng G. J, “Adaptive ADMM for Dictionary Learning in Convolutional Sparse Representation,” IEEE Transactions on Image Processing, 2019. doi: 10.1109/TIP.2019.2896541

21. Alcarez. G. D, Favaro. F, Lecumberry. F, Martin. A, Oliver. J. P, Oreggioni. J, Ramirez. I, Seroussi. G and Steinfeld. L, “Wireless EEG System Achieving High Throughput and Reduced Energy Consumption Through Lossless,” IEEE Transactions on Biomedical Circuits and Systems, vol. 12, no. 1, pp. 231–241, 2018.

22. Schetinin. V and Jakaite. L, “Extraction of features from sleep EEG for Bayesian assessment of brain development,” PLoS ONE, vol. 12, no. 3, pp. 1–13, 2017. doi: 10.1371/journal.pone.0174027

23. Zhang. Y, Wang. Y, Jin. J and Wang. X, “Sparse Bayesian Learning for Obtaining Sparsity of EEG Frequency Bands Based Feature Vectors in Motor Imagery Classification,” International Journal of Neural Systems, vol. 26, 2016.

24. Zhang. J, Li. Y, Gu. Z, and Yu. Z. L, “Recoverability analysis for modified compressive sensing with partially known support,” PLoS ONE, vol. 60, no. 1, pp. 221–224, 2013.

25. Zhang. Z, Jung. T. P, Makeig. S and Rao. B. D, “Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy Consumption and Inexpensive Hardware,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 1, pp. 221–224, 2012. doi: 10.1109/TBME.2012.2217959

26. Minaee. S and Wang. Y, “An ADMM Approach to Masked Signal Decomposition Using Subspace Representation,” IEEE Transactions on Image Processing, 2019. doi: 10.1109/TIP.2019.2894966 30703020

27. Liu. B, Zhang. Z, Xu. G, Fan. H and Fu. Q, “Energy efficient telemonitoring of physiological signals via compressed sensing: A fast algorithm and power consumption evaluation,” Biomedical Signal Processing and Control, vol. 11, pp. 80–88, Feb. 2014. doi: 10.1016/j.bspc.2014.02.010

28. Aggarwal. P and Gupta. A, “Double temporal sparsity based accelerated reconstruction of compressively sensed resting-state fMRI,” Computers in Biology and Medicine, vol. 91, pp. 255–266, 2017. doi: 10.1016/j.compbiomed.2017.10.020

29. Christensen. C. B, Harte. J. M, Lunner. T and Kidmose. P, “Ear-EEG-Based Objective Hearing Threshold Estimation Evaluated on Normal Hearing Subjects,” IEEE Transactions on Biomedical Engineering, vol. 65, no. 5, pp. 1026–1034, 2018. doi: 10.1109/TBME.2017.2737700

30. Kessy. A, Lewin. A and Strimmer. K, “Optimal Whitening and Decorrelation,” The American Statistician, vol. 72, no. 4, pp. 309–314, 2018. doi: 10.1080/00031305.2016.1277159

31. Mangia. M, Pareschi. F, Cambareri. V, Rovatti. R and Setti. G, “Rakeness-Based Design of Low-Complexity Compressed Sensing,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 64, no. 5, pp. 1201–1213, 2017. doi: 10.1109/TCSI.2017.2649572

32. Mehmood. R. M, Du. R and Lee. H, “Optimal Feature Selection and Deep Learning Ensembles Method for Emotion Recognition From Human Brain EEG Sensors,” IEEE Access, pp. 14797–14806, 2017. doi: 10.1109/ACCESS.2017.2724555

33. Xia. D and Koltchinskii. V, “Estimation of low rank density matrices: Bounds in Schatten norms and other distances,” Electronic Journal of Statistics, vol. 10, no. 2, pp. 2717–2745, 2016. doi: 10.1214/16-EJS1192

34. Zhang. C, Ahmad. M and Wang. Y, “ADMM Based Privacy-Preserving Decentralized Optimization,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 3, pp. 565–580, 2019. doi: 10.1109/TIFS.2018.2855169

35. Shukla. A and Majumdar. A, “Row-sparse Blind Compressed Sensing for Reconstructing Multi-channel EEG Signals,” Biomedical Signal Processing and Control, vol. 18, pp. 174–178, 2015. doi: 10.1016/j.bspc.2014.09.003

36. Sun. B, Chen. Q, Xu. X, He. Y and Jiang. J, “Permuted and Filtered Spectrum Compressive Sensing,” IEEE Signal Processing Letters, vol. 20, no. 7, pp. 685–688, 2013. doi: 10.1109/LSP.2013.2258464

37. Marques. E. C, Maciel. N, Naviner. L, Cai. H and Yang. J, “A Review of Sparse Recovery Algorithms,” IEEE Access, 2019.

38. Craven. D, Mcginley. B, Kilmartin. L, Glavin. M and Jones. E, “Compressed Sensing for Bioelectric Signals: A Review,” IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 2, pp. 529–540, 2015. doi: 10.1109/JBHI.2014.2327194

