Background Gastrointestinal (GI) functions are controlled by the enteric nervous system (ENS) in vertebrates, but data on snakes are scarce, as most studies have been done in mammals. However, the feeding of many snakes, including Crotalus atrox, contrasts strongly with that of mammals: the snake consumes immense, intact prey that is forwarded, stored, and processed by the GI tract. We performed immunohistochemistry in different regions of the GI tract to assess neuronal density and to quantify cholinergic, nitrergic, and VIPergic enteric neurons. We recorded motility patterns and determined the role of different neurotransmitters in the control of motility. Neuroimaging experiments complemented the motility findings. Results A well-developed ganglionated myenteric plexus (MP) was found in the oesophagus, stomach, and small and large intestines. In the submucous plexus (SMP), most neurons were scattered individually without forming ganglia. The lowest number of neurons was found in the SMP of the proximal colon, the highest in the MP of the oesophagus. The total number of neurons in the ENS was estimated at approximately 1.5 million. In all regions of the SMP except the oesophagus, more nitric oxide synthase+ than choline acetyltransferase (ChAT)+ neurons were counted, whereas ChAT+ neurons dominated in the MP. In the SMP most nerve cells were VIP+, in contrast to the MP, where numerous VIP+ nerve fibers but hardly any VIP+ neuronal cell bodies were seen. Regular contractions were observed in muscle strips from the distal stomach, but not from the proximal stomach or the colon. We identified acetylcholine as the main excitatory and nitric oxide as the main inhibitory neurotransmitter. Furthermore, 5-HT and dopamine stimulated motility, while VIP and the β-receptor agonist isoproterenol inhibited it. ATP had only a minor inhibitory effect.
Nerve-evoked contractile responses were sodium-dependent and insensitive to tetrodotoxin (TTX) but sensitive to lidocaine, a finding supported by neuroimaging experiments. Conclusions The structure of the ENS and the patterns of gastric and colonic contractile activity of Crotalus atrox are strikingly different from those of mammalian models. However, the main excitatory and inhibitory pathways appear to be conserved. Future studies should explore how the observed differences reflect an adaptation to the particular feeding strategy of the snake.
The ammonium nitrate (AN) and fuel oil (FO) mixture known as ANFO is a typical representative of non-ideal explosives. In contrast to ideal explosives, the detonation behavior of ANFO depends strongly on the charge diameter and on the existence and properties of confinement, with a large failure diameter and a long run distance required to establish steady-state detonation. In this study, shock initiation and propagation of detonation in ANFO were studied experimentally by determining the detonation velocity at different distances from the initiation point, as well as by numerical modeling using the AUTODYN hydrodynamics code and a Wood–Kirkwood detonation model incorporated into the EXPLO5 thermochemical code. The run distance to steady-state detonation velocity was determined as a function of charge diameter, booster charge mass, and confinement. It was demonstrated that a Lee–Tarver ignition-and-growth reactive flow model with properly calibrated rate constants was capable of correctly reproducing the experimentally observed shock initiation behavior and propagation of detonation in ANFO, as well as the effects of charge diameter, booster mass, and confinement.
We provide a new efficient version of the backpropagation algorithm, specialized to the case where the weights of the neural network being trained are sparse. Our algorithm is general, as it applies to arbitrary (unstructured) sparsity and common layer types (e.g., convolutional or linear). We provide a fast vectorized implementation on commodity CPUs, and show that it can yield speedups in end-to-end runtime experiments, both in transfer learning using already-sparsified networks, and in training sparse networks from scratch. Thus, our results provide the first support for sparse training on commodity hardware.
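As a rough illustration of the general idea (not the paper's algorithm or its vectorized CPU implementation), the backward pass of a sparse linear layer y = W x can restrict gradient computation to W's stored nonzeros, so the work scales with nnz(W) rather than with the full dense shape. The sketch below assumes SciPy CSR storage and a single linear layer without bias:

```python
import numpy as np
from scipy import sparse

def sparse_linear_backward(W_csr, x, grad_y):
    """Backward pass of y = W @ x when W is sparse (CSR).

    Only gradients at W's nonzero positions are formed, so the
    cost scales with nnz(W) instead of rows * cols.
    """
    # Gradient w.r.t. the input: W^T @ grad_y (sparse mat-vec).
    grad_x = W_csr.T @ grad_y
    # Gradient w.r.t. the nonzero weights: for a stored entry (i, j),
    # dL/dW[i, j] = grad_y[i] * x[j]; entries that are pruned away
    # receive no gradient and stay zero.
    rows, cols = W_csr.nonzero()
    grad_W_data = grad_y[rows] * x[cols]
    grad_W = sparse.csr_matrix((grad_W_data, (rows, cols)),
                               shape=W_csr.shape)
    return grad_x, grad_W
```

Skipping the zero positions in grad_W is exactly what keeps the sparsity mask fixed during sparse training: masked weights never receive updates.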
In this paper we model quantum signals using statistical signal processing methods. A Gaussian distribution is assumed for the input quantum signal, as Gaussian states have been proven to be an important and robust class of states, and most of the important experiments in quantum information are performed with Gaussian light. In addition, a joint noise model is invoked, and a received-signal model is formulated as the convolution of the transmitted signal and the joint quantum noise, in order to derive the theoretically achievable capacity of a single quantum link. The joint quantum noise model combines quantum Poisson noise with classical Gaussian noise. We examine the capacity of the quantum channel as a function of SNR to identify its overall tendency. Throughout, we use the channel equation in terms of random variables to investigate the quantum signal and noise models statistically. These methods are proposed to develop quantum statistical signal processing, an idea that originates from classical statistical signal processing.
In this article, we propose a closed-form solution for the capacity of a single quantum channel. A Gaussian-distributed input is considered for the analytical calculation of the capacity. In our previous papers, we introduced models for the joint quantum noise and the corresponding received signal; in the current work, we prove that these models are Gaussian mixture distributions. We show how to handle both cases, namely (I) Gaussian mixture distributions of scalar variables and (II) Gaussian mixture distributions of random vectors. Our goal is to calculate the entropy of the joint noise and the entropy of the received signal in order to obtain the capacity expression of the quantum channel. The main challenge is the functional form of the Gaussian mixture distribution: its entropy cannot be calculated in closed form because of the logarithm of a sum of exponential functions. As a solution, we propose a lower bound and an upper bound for each of the entropies of the joint noise and the received signal; together, these inequalities yield an upper bound on the mutual information and hence on the maximum achievable data rate, i.e., the capacity. The resulting closed-form capacity expression distinguishes this paper from our previous works. The capacity expression and the corresponding bounds are calculated for both cases: Gaussian mixture distributions of scalar variables and of random vectors.
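To illustrate the difficulty described above, the sketch below (a generic illustration using the standard conditioning bounds, not the paper's specific bounds) brackets the entropy of a scalar Gaussian mixture via H(X | I) ≤ H(X) ≤ H(X | I) + H(I), where I is the latent component index, and compares them with a Monte Carlo estimate of −E[log p(X)], which has no closed form because of the log of a sum of exponentials:

```python
import numpy as np

def gaussian_entropy(var):
    """Differential entropy of N(mu, var) in nats (independent of mu)."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

def mixture_entropy_bounds(weights, means, variances):
    """Bounds on H(X) for a scalar Gaussian mixture:
    H(X | I) <= H(X) <= H(X | I) + H(I), I = component index.
    (These particular bounds do not depend on the means.)"""
    w = np.asarray(weights, dtype=float)
    comp_H = np.array([gaussian_entropy(v) for v in variances])
    lower = np.sum(w * comp_H)             # H(X | I)
    upper = lower - np.sum(w * np.log(w))  # H(X | I) + H(I)
    return lower, upper

def mixture_entropy_mc(weights, means, variances, n=200_000, seed=0):
    """Monte Carlo estimate of H(X) = -E[log p(X)]."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    idx = rng.choice(len(w), size=n, p=w)
    x = rng.normal(mu[idx], np.sqrt(var[idx]))
    # log p(x): the troublesome log of a weighted sum of exponentials.
    log_comp = (-0.5 * np.log(2 * np.pi * var[None, :])
                - (x[:, None] - mu[None, :]) ** 2 / (2 * var[None, :]))
    log_p = np.log(np.sum(np.exp(log_comp) * w[None, :], axis=1))
    return -np.mean(log_p)
```

For well-separated components the true entropy approaches the upper bound; for fully overlapping components it collapses to the lower bound.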
In this paper we present an averaging technique applicable to the design of zeroth-order Nash equilibrium seeking algorithms. First, we propose a multi-timescale discrete-time averaging theorem that requires only that the equilibrium is semi-globally practically stabilized by the averaged system, while also allowing the averaged system to depend on "fast" states. Furthermore, sequential application of the theorem is possible, which enables its use for multi-layer algorithm design. Second, we apply the aforementioned averaging theorem to prove semi-global practical convergence of the zeroth-order information variant of the discrete-time projected pseudogradient descent algorithm, in the context of strongly monotone, constrained Nash equilibrium problems. Third, we use the averaging theory to prove the semi-global practical convergence of the asynchronous pseudogradient descent algorithm to solve strongly monotone unconstrained Nash equilibrium problems. Lastly, we apply the proposed asynchronous algorithm to the connectivity control problem in multi-agent systems.
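As a minimal, hypothetical instance of the zeroth-order idea (a two-player quadratic game with a strongly monotone pseudogradient, chosen for illustration; it does not reproduce the paper's multi-timescale or asynchronous structure), each player estimates its own partial gradient from cost evaluations alone via a two-point difference and runs projected descent:

```python
import numpy as np

# Hypothetical two-player game; the pseudogradient mapping has Jacobian
# [[2, 0.5], [0.5, 2]], which is symmetric positive definite, so the
# game is strongly monotone with a unique Nash equilibrium.
def J1(x):  # cost of player 1 (controls x[0])
    return (x[0] - 1.0) ** 2 + 0.5 * x[0] * x[1]

def J2(x):  # cost of player 2 (controls x[1])
    return (x[1] + 1.0) ** 2 + 0.5 * x[0] * x[1]

def zo_partial(J, x, i, delta=1e-2):
    """Two-point zeroth-order estimate of dJ/dx_i using only cost queries."""
    e = np.zeros_like(x)
    e[i] = delta
    return (J(x + e) - J(x - e)) / (2 * delta)

def project(x, lo=-5.0, hi=5.0):
    """Projection onto the box constraint [lo, hi]^2."""
    return np.clip(x, lo, hi)

def zo_pseudogradient_descent(steps=2000, alpha=0.05, delta=1e-2):
    x = np.zeros(2)
    for _ in range(steps):
        # Each player only perturbs and measures its own cost.
        g = np.array([zo_partial(J1, x, 0, delta),
                      zo_partial(J2, x, 1, delta)])
        x = project(x - alpha * g)
    return x
```

For this quadratic game the central difference is exact, so the iterates converge to the Nash equilibrium (4/3, −4/3); for general smooth costs the perturbation size delta induces the O(delta)-bias that makes convergence "practical" rather than exact.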
This case report describes a diagnosis of Paget-Schroetter syndrome in a man in his 50s with a network of small veins in the left infraclavicular region discovered after unsuccessful left subclavian vein puncture.
Measurements of the suppression and correlations of dijets are performed using 3 $\mu$b$^{-1}$ of Xe+Xe data at $\sqrt{s_{\mathrm{NN}}} = 5.44$ TeV collected with the ATLAS detector at the LHC. Dijets with jets reconstructed using the $R=0.4$ anti-$k_t$ algorithm are measured differentially in jet $p_{\text{T}}$ over the range of 32 GeV to 398 GeV and in the centrality of the collisions. Significant dijet momentum imbalance is found in the most central Xe+Xe collisions, which decreases in more peripheral collisions. Results from the measurement of per-pair normalized and absolutely normalized dijet $p_{\text{T}}$ balance are compared with previous Pb+Pb measurements at $\sqrt{s_{\mathrm{NN}}} =5.02$ TeV. The differences between the dijet suppression in Xe+Xe and Pb+Pb are further quantified by the ratio of pair nuclear-modification factors. The results are found to be consistent with those measured in Pb+Pb data when compared in classes of the same event activity and when taking into account the difference between the center-of-mass energies of the initial parton scattering process in Xe+Xe and Pb+Pb collisions. These results should provide input for a better understanding of the role of energy density, system size, path length, and fluctuations in parton energy loss.
The effectiveness of the e-tax system in encouraging tax compliance remains largely unexplored. Thus, the current study examines the interrelationship between technological predictors in explaining tax compliance intention among certified tax professionals. Based on the literature on information system success and tax compliance intention, this paper proposes an expanded conceptual framework that incorporates convenience and the perception of reduced compliance costs as predictors and satisfaction as a mediator. Data were collected from 650 tax professionals who used e-Filing and 492 who used e-Form through an online survey and were analyzed using hierarchical multiple regression. The empirical results suggest that participants' perceived service quality of e-Filing services and perceptions of reduced compliance costs positively influence users' willingness to comply with tax regulations; among e-Form users, only the latter predictor is significant. The results also provide statistical evidence for the mediating role of satisfaction in the relationship between all predictors and tax compliance intention. This study encourages tax policymakers and e-tax filing providers to improve their services in order to increase user satisfaction and tax compliance.
Background Immunocompromised patients with acute diverticulitis are at increased risk of morbidity and mortality. The aim of this study was to compare clinical presentations, types of treatment, and outcomes between immunocompromised and immunocompetent patients with acute diverticulitis. Methods We compared data on patients with acute diverticulitis extracted from the Web-based International Registry of Emergency Surgery and Trauma (WIRES-T) from January 2018 to December 2021. First, two groups were identified: medical therapy (A) and surgical therapy (B). Each group was divided into three subgroups: non-immunocompromised (grade 0), mildly to moderately immunocompromised (grade 1), and severely immunocompromised (grade 2). Results Data from 482 patients were analyzed: 229 patients (47.5%) [M:F = 1:1; median age: 60 (24–95) years] in group A and 253 patients (52.5%) [M:F = 1:1; median age: 71 (26–94) years] in group B. There was a significant difference between the two groups in grade distribution: 69.9% versus 38.3% for grade 0, 26.6% versus 51% for grade 1, and 3.5% versus 10.7% for grade 2 (p < 0.00001). In group A, severe sepsis (p = 0.027) was more common at higher grades of immunodeficiency, and patients with grade 2 required longer hospitalization (p = 0.005). In group B, a similar pattern was found for severe sepsis (p = 0.002), quick Sequential Organ Failure Assessment score > 2 (p = 0.0002), and the Mannheim Peritonitis Index (p = 0.010). Hartmann's procedure was mainly performed in grades 1–2 (p < 0.0001), and major complications increased significantly after Hartmann's procedure (p = 0.047). Mortality was higher in immunocompromised patients (p = 0.002). Conclusions Immunocompromised patients with acute diverticulitis present with a more severe clinical picture. When surgery is required, immunocompromised patients mainly undergo Hartmann's procedure.
Postoperative morbidity and mortality are, however, higher in immunocompromised patients, who also require a longer hospital stay.
The breakthrough performance of large language models (LLMs) comes with major computational footprints and high deployment costs. In this paper, we make progress towards resolving this problem by proposing a novel structured compression approach for LLMs, called ZipLM. ZipLM achieves state-of-the-art accuracy-versus-speedup trade-offs while matching a set of desired target runtime speedups in any given inference environment. Specifically, given a model, a dataset, an inference environment, and a set of speedup targets, ZipLM iteratively identifies and removes components with the worst loss-runtime trade-off. Unlike prior methods that specialize in either the post-training/one-shot or the gradual compression setting, and only for specific families of models such as BERT (encoder) or GPT (decoder), ZipLM produces state-of-the-art compressed models across all these settings. Furthermore, ZipLM achieves superior results at a fraction of the computational cost of prior distillation and pruning techniques, making it a cost-effective approach for generating an entire family of smaller, faster, and highly accurate models, guaranteed to meet the desired inference specifications. In particular, ZipLM outperforms all prior BERT-base distillation and pruning techniques, such as CoFi, MiniLM, and TinyBERT. Moreover, it matches the performance of the heavily optimized MobileBERT model, obtained via extensive architecture search, by simply pruning the baseline BERT-large model. When compressing GPT2, ZipLM outperforms DistilGPT2 while being 60% smaller and 30% faster. Our code is available at: https://github.com/IST-DASLab/ZipLM.
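The iterative identify-and-remove loop described above can be caricatured as a greedy selection over per-component estimates of loss increase and runtime cost. The sketch below is purely schematic (toy component names and numbers, not the ZipLM implementation, which measures these quantities on a real model and inference environment):

```python
# Schematic greedy compression: repeatedly remove the component with the
# "worst" loss-runtime trade-off, i.e. the smallest loss increase per
# unit of runtime saved, until a target runtime budget is met.
def greedy_compress(components, target_runtime):
    """components: dict name -> (loss_increase, runtime_cost).
    Returns the removal order and the remaining total runtime."""
    kept = dict(components)
    removed = []
    total_runtime = sum(rt for _, rt in kept.values())
    while total_runtime > target_runtime and kept:
        # Cheapest loss penalty per unit of runtime saved goes first.
        name = min(kept, key=lambda n: kept[n][0] / kept[n][1])
        _, rt = kept.pop(name)
        removed.append(name)
        total_runtime -= rt
    return removed, total_runtime
```

In the real setting, the speedup targets define the runtime budget, and the loss and runtime of each candidate component are re-estimated as compression proceeds rather than fixed up front as in this toy.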
Introduction The presence of focal cortical and white matter damage in patients with multiple sclerosis (pwMS) might lead to specific alterations in brain networks that are associated with cognitive impairment. We applied microstructure-weighted connectomes to investigate (i) the relationship between global network metrics and information processing speed in pwMS, and (ii) whether the disruption provoked by focal lesions on global network metrics is associated with patients' information processing speed. Materials and methods Sixty-eight pwMS and 92 healthy controls (HC) underwent neuropsychological examination and 3T brain MRI including multishell diffusion (dMRI), 3D FLAIR, and MP2RAGE. Whole-brain deterministic tractography and connectometry were performed on dMRI. Connectomes were obtained using the Spherical Mean Technique and were weighted by the intracellular fraction. We identified white matter lesions and cortical lesions on 3D FLAIR and MP2RAGE images, respectively. PwMS were subdivided into cognitively preserved (CPMS) and cognitively impaired (CIMS) using the Symbol Digit Modalities Test (SDMT) z-score at a cut-off value of −1.5 standard deviations. Statistical analyses were performed using robust linear models with age, gender, and years of education as covariates, followed by correction for multiple testing. Results Out of 68 pwMS, 18 were CIMS and 50 were CPMS. We found significant changes in all global network metrics in pwMS versus HC (p < 0.05), except for modularity. All global network metrics were positively correlated with SDMT, except for modularity, which showed an inverse correlation. Cortical, leukocortical, and periventricular lesion volumes significantly influenced the relationship between (i) network density and information processing speed and (ii) modularity and information processing speed in pwMS. Interestingly, this was not the case when an exploratory analysis was performed in the subgroup of CIMS patients.
Discussion Our study showed that cortical (especially leukocortical) and periventricular lesions affect the relationship between global network metrics and information processing speed in pwMS. Our data also suggest that, in CIMS patients, increased focal cortical and periventricular damage does not linearly affect the relationship between network properties and SDMT, suggesting that other mechanisms (e.g. disruption of local networks, loss of compensatory processes) might be responsible for the development of processing speed deficits.
Objective Published reports describing awareness and knowledge of familial hypercholesterolemia (FH) among pediatricians are few and differ considerably across countries. We aimed to assess awareness and knowledge of FH among pediatricians in Serbia. Methods A web-based cross-sectional study using a self-designed questionnaire was conducted during the annual congress of the Serbian Association of Preventive Pediatrics in 2020. Results A total of 141 pediatricians completed the questionnaire (response rate 16.1%). Overall, 91% of participants knew about the genetic inheritance of FH, 84.3% were aware of the long-term health risks of FH, 77% were familiar with normal cholesterol values in children, and 71% knew the prevalence of FH in the general population. On the other hand, only 36.8% declared that they were familiar with international guidelines for FH drug treatment, and only 26.2% reported having patients with FH. Conclusion There is a substantial lack of practical clinical knowledge among Serbian pediatricians on managing children with FH. In addition, the extremely low questionnaire response rate (16.1%) suggests that most pediatricians are not aware of the clinical importance of FH in childhood.
Portal vein aneurysm (PVA) is a rare and poorly understood vascular abnormality, representing 3% of all venous aneurysms in the human body. It can be congenital or acquired, and is located mainly at the confluence, main trunk, branches, or bifurcation of the portal venous system. A PVA as an abnormality of the portal venous system was first reported in 1956 by Barzilai and Kleckner. A 2015 review entitled "Portal vein aneurysm: What to know" considered fewer than 200 cases. In the last seven years, the number of PVAs diagnosed has increased thanks to routine abdominal imaging. The aim of this review is to provide a comprehensive update on PVA, including aetiology, epidemiology, and clinical assessment, along with an evaluation of advanced multimodal imaging features of the aneurysm and management approaches.