On the Generalized Strongly Nil-Clean Property of Matrix Rings

2021 · Vol 28 (04) · pp. 625-634
Author(s): Aleksandra S. Kostić, Zoran Z. Petrović, Zoran S. Pucanović, Maja Roslavcev

Let [Formula: see text] be an associative unital ring, not necessarily commutative. We analyze conditions under which every [Formula: see text] matrix [Formula: see text] over [Formula: see text] is expressible as a sum [Formula: see text] of (commuting) idempotent matrices [Formula: see text] and a nilpotent matrix [Formula: see text].
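As a concrete toy illustration of such a decomposition (our own example over the integers, with two idempotents, not taken from the paper), the following sketch verifies A = E1 + E2 + N for a 2×2 matrix:

```python
# Toy example (ours): decompose A as a sum of two commuting idempotent
# matrices E1, E2 and a nilpotent matrix N, over the ring of integers.

def matmul(X, Y):
    """Product of 2x2 integer matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

A = [[2, 1], [0, 2]]
E1 = [[1, 0], [0, 1]]   # the identity matrix is trivially idempotent
E2 = [[1, 0], [0, 1]]
N = [[0, 1], [0, 0]]    # strictly upper triangular, so N*N = 0

assert matmul(E1, E1) == E1 and matmul(E2, E2) == E2   # idempotency
assert matmul(E1, E2) == matmul(E2, E1)                # E1 and E2 commute
assert matmul(N, N) == [[0, 0], [0, 0]]                # nilpotency
assert matadd(matadd(E1, E2), N) == A                  # A = E1 + E2 + N
```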

2020 · Vol 18 (1) · pp. 182-193
Author(s): He Yuan, Liangyun Chen

Abstract Let R be a subset of a unital ring Q such that 0 ∈ R. Let us fix an element t ∈ Q. If R is a (t; d)-free subset of Q, then Tn(R) is a (t′; d)-free subset of Tn(Q), where t′ ∈ Tn(Q) with $t'_{ll}$ = t, l = 1, 2, …, n, for any n ∈ N.


2016 · Vol 15 (09) · pp. 1650173
Author(s): G. Cǎlugǎreanu, T. Y. Lam

A nonzero ring is said to be fine if every nonzero element in it is a sum of a unit and a nilpotent element. We show that fine rings form a proper class of simple rings, and they include properly the class of all simple artinian rings. One of the main results in this paper is that matrix rings over fine rings are always fine rings. This implies, in particular, that any nonzero (square) matrix over a division ring is the sum of an invertible matrix and a nilpotent matrix.
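As a numeric sanity check of the last statement (an example we chose ourselves, over the rationals viewed as a division ring), the singular nonzero matrix A below splits as an invertible matrix U plus a nilpotent matrix N:

```python
# Our own example: A is nonzero but singular, yet A = U + N with U invertible
# and N nilpotent, as the fine-ring result guarantees over a division ring.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[0, 0], [0, 1]]
N = [[1, 1], [-1, -1]]    # trace 0 and det 0, hence N*N = 0
U = [[-1, -1], [1, 2]]    # det(U) = -1, so U is invertible

assert matmul(N, N) == [[0, 0], [0, 0]]                              # nilpotent
assert det2(U) != 0                                                  # invertible
assert [[U[i][j] + N[i][j] for j in range(2)] for i in range(2)] == A
```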


2020 · Vol 12 (06) · pp. 2050074
Author(s): Yangjiang Wei, Heyan Xu, Linhua Liang

In this paper, we investigate the linear dynamical system [Formula: see text], where [Formula: see text] is the ring of integers modulo [Formula: see text] ([Formula: see text] is a prime). To facilitate the visualization of this system, we associate with it a graph [Formula: see text] whose nodes are the points of [Formula: see text] and which has an arrow from [Formula: see text] to [Formula: see text] when [Formula: see text], for a fixed [Formula: see text] matrix [Formula: see text]. We obtain the in-degree of each node in [Formula: see text] and give a complete description of [Formula: see text] when [Formula: see text] is an idempotent, a nilpotent, or a diagonal matrix. These results generalize those of Elspas [1959] and Toledo [2005].
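To make the graph construction concrete, here is a small sketch (our own, with p = 3 and one specific nilpotent matrix, not the paper's notation) that builds the map x ↦ Ax on (Z_p)² and tabulates node in-degrees:

```python
# Sketch (ours): functional graph of x -> A x (mod p) on (Z_p)^2 for a
# nilpotent matrix A, with in-degree counts for every node.
from itertools import product
from collections import Counter

p = 3
A = [[0, 1], [0, 0]]   # nilpotent: A*A = 0 (mod p)

def step(x):
    """One application of the linear map x -> A x (mod p)."""
    return tuple(sum(A[i][j] * x[j] for j in range(2)) % p for i in range(2))

# In-degree of node y = number of x with A x = y (mod p).
indeg = Counter(step(x) for x in product(range(p), repeat=2))

# For this A the image is {(t, 0)}: p nodes, each with in-degree p;
# every node outside the image has in-degree 0.
assert len(indeg) == p
assert all(d == p for d in indeg.values())
```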


2010 · Vol 09 (05) · pp. 717-724
Author(s): VLADIMIR M. LEVCHUK, OKSANA V. RADCHENKO

Derivations of the ring of all finitary niltriangular matrices over an arbitrary associative ring with identity are described, for any chain of matrix indices. Every Lie or Jordan derivation of this ring is a derivation modulo the third hypercenter.


Author(s): Grigore Călugăreanu, Yiqiang Zhou

An idempotent in a ring is called fine (see G. Călugăreanu and T. Y. Lam, Fine rings: A new class of simple rings, J. Algebra Appl. 15(9) (2016) 18) if it is a sum of a nilpotent and a unit. A ring is called an idempotent-fine ring (briefly, an [Formula: see text] ring) if all its nonzero idempotents are fine. In this paper, the properties of [Formula: see text] rings are studied. A notable result is proved: The diagonal idempotents [Formula: see text] ([Formula: see text]) are fine in the matrix ring [Formula: see text] for any unital ring [Formula: see text] and any positive integer [Formula: see text]. This yields many classes of rings over which matrix rings are [Formula: see text].
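A minimal instance of the diagonal-idempotent result (our own computation, in the 2×2 integer matrix ring rather than over a general unital ring): the idempotent E11 is fine, i.e. a sum of a nilpotent matrix and a unit:

```python
# Our own check: E11 = U + N in the 2x2 integer matrix ring, with N
# nilpotent and U a unit (determinant +/-1, hence invertible over Z).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E11 = [[1, 0], [0, 0]]
N = [[1, 1], [-1, -1]]   # trace 0 and det 0, hence N*N = 0
U = [[0, -1], [1, 1]]    # det(U) = 1, a unit in the 2x2 integer matrix ring

det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
assert det in (1, -1)                                  # U is a unit
assert matmul(N, N) == [[0, 0], [0, 0]]                # N is nilpotent
assert [[U[i][j] + N[i][j] for j in range(2)] for i in range(2)] == E11
```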


2012 · Vol 12 (01) · pp. 1250140
Author(s): GEORGY P. EGORYCHEV, FERIDE KUZUCUOĞLU, VLADIMIR M. LEVCHUK

Using the method of integral representation of combinatorial sums we enumerate ideals of certain nilpotent matrix rings.


2021 · Vol 11 (15) · pp. 6704
Author(s): Jingyong Cai, Masashi Takemoto, Yuming Qiu, Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), multipliers are resource-intensive and in short supply in many scenarios. Previous work has shown the advantage of computing activation functions, such as the sigmoid, by shift-and-add operations, although these methods fail to remove multiplications from training altogether. In this paper, we propose an approach that converts all multiplications in the forward and backward passes of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be approximated by their sine values. Multiplications between the weights and error signals are transferred to multiplications of their sine values, which are replaceable with simpler operations with the help of the product-to-sum formula. In addition, a rectified sine activation function is utilized to convert layer inputs into sine values as well. In this way, the original multiplication-intensive operations can be computed through simple shift-and-add operations. This trigonometric approximation method provides an efficient training and inference alternative for devices with insufficient hardware multipliers. Experimental results demonstrate that this method obtains a performance close to that of classical training algorithms. The approach we propose sheds new light on future hardware customization research for machine learning.
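The core trick can be sketched in a few lines (our simplification of the idea, not the authors' full training pipeline): values near zero are approximated by their sines, and a product of sines becomes a difference of cosines via the product-to-sum identity, leaving only argument additions in place of the multiplication:

```python
# Sketch (ours): for a small weight w and error e, w*e is approximated by
# sin(w)*sin(e), which the product-to-sum identity rewrites as
#   sin(w)*sin(e) = (cos(w - e) - cos(w + e)) / 2,
# so only the additions w - e and w + e remain in place of the product.
import math

w, e = 0.05, -0.03                                  # small, near-zero values
exact = w * e
approx = (math.cos(w - e) - math.cos(w + e)) / 2    # equals sin(w)*sin(e)

assert abs(approx - math.sin(w) * math.sin(e)) < 1e-12  # identity is exact
assert abs(approx - exact) < 1e-6                       # sin(x) ~ x near 0
```

In hardware, the remaining cosine evaluations and the division by 2 are the parts amenable to lookup tables and bit shifts, which is where the shift-and-add saving comes from.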


2011 · Vol 81 (4) · pp. 529-537
Author(s): Alexandre G. Patriota, Gauss M. Cordeiro

2015 · Vol 93 (2) · pp. 186-193
Author(s): MASANOBU KANEKO, MIKA SAKATA

We give three identities involving multiple zeta values of height one and of maximal height: an explicit formula for the height-one multiple zeta values, a regularised sum formula and a sum formula for the multiple zeta values of maximal height.
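For readers unfamiliar with multiple zeta values, the following numeric sketch (our own, illustrating Euler's classical identity ζ(2,1) = ζ(3) rather than the paper's new formulas) shows the kind of sum-formula statement involved:

```python
# Numerically verify Euler's identity zeta(2,1) = zeta(3), where
# zeta(2,1) = sum over m > n >= 1 of 1 / (m^2 * n). Both series are
# truncated at M terms; the tail of the double sum shrinks like log(M)/M.
M = 10000

zeta3 = sum(1.0 / m**3 for m in range(1, M + 1))

H = 0.0        # running harmonic number H_{m-1} = 1 + 1/2 + ... + 1/(m-1)
zeta21 = 0.0
for m in range(2, M + 1):
    H += 1.0 / (m - 1)
    zeta21 += H / m**2

assert abs(zeta21 - zeta3) < 5e-3   # both approach zeta(3) ~ 1.2020569
```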

