Abstract

Nonlinear spiking neural P systems (NSNP systems) are a recently introduced class of computation models in which the state of a neuron is represented by a real number and neuron firing is handled by nonlinear spiking rules. In this work, to improve computing performance, weights and delays are introduced into NSNP systems, and universal nonlinear spiking neural P systems with delays and weights on synapses (NSNP-DW) are proposed. Weights are treated as multiplicative constants by which the number of spikes is scaled when transiting across synapses, and delays account for the speed at which the synapses between neurons transmit information. As a distributed parallel computing model, the NSNP-DW system is proven Turing universal as both a number generating and a number accepting device. Moreover, 47 and 43 neurons are sufficient for constructing two small universal NSNP-DW systems. An NSNP-DW system solving the Subset Sum problem is also presented in this work.

1. Introduction

Membrane computing (MC) is a representative of a new type of computing, abstracted from the phenomenon of signal transmission between cells in living organisms. It was proposed by Gheorghe Păun in 1998 and published in 2000 [1]. As a new type of natural computing, membrane computing has abundant model support [2-4] and is widely used in real life [5, 6]. The distributed computing models it produces are named membrane systems or P systems. So far, P systems are mainly divided into three categories: cell-like P systems [7, 8], tissue-like P systems [9, 10], and neural-like P systems. Spiking neural P (SNP) systems [11], axon P systems [12], and dendrite P systems [13] are widely studied types of neural-like P systems. Research on SNP systems has been abundant for more than ten years. Similar to spiking neural networks (SNN), in an SNP system neurons are activated and spikes are transmitted to other neurons along synapses. SNP systems encode information through the spikes in neurons. Intuitively, an SNP system is represented by a directed graph, where the nodes represent neurons and the arcs connecting them represent synapses. Neurons contain spikes and rules; two kinds of rules are used in SNP systems: firing rules and forgetting rules. Generating, accepting, and function computing are the three working modes of SNP systems. It is worth mentioning that, by introducing neuron division, budding, or dissolution, SNP systems have been proven able to solve some computationally hard problems, such as the SAT and Subset Sum problems [14, 15]. SNP systems can also solve many practical problems such as fault diagnosis of power systems [16-18], image processing [19, 20], and combinatorial optimization [21]. Based on these innovative works, more and more scholars are paying attention to SNP systems. Many variants of SNP systems have been proposed, and their computing power has been proven.

The SNP system itself was inspired by the spiking signals observed in biological phenomena. From the perspective of biological facts, Păun et al. considered another important component related to brain function, astrocytes, which implicitly control the number of spikes travelling along axons. The functional application of astrocytes in the SNP model was first introduced in 2007 [22]. In 2012, Pan et al. formally proposed SNP systems with astrocytes [23]. Further discussions of this type of system can be found in [24, 25]. In such systems, astrocytes influence computation by controlling the transmission of spikes on synapses. Considering the difference in the number of synaptic connections between neurons, Pan et al. [26] proposed spiking neural P systems with weighted synapses. Considering the mutual annihilation between spikes and antispikes, Pan and Păun [27] introduced SNP systems with antispikes. In the initial SNP system, there was a unique synaptic connection between two neurons. Based on biological facts, Peng et al. [28] proposed that the synapse of each neuron can have an indefinite number of channels connected to subsequent neurons, leading to SNP systems with multiple channels. Later, motivated by the phenomenon that neurons carry positive and negative ions, polarized SNP systems were studied [29, 30]. In view of the neurophysiological observation that the speed of information transmission differs across synapses, Song et al. [31] proposed spiking neural P systems with delay on synapses (SNP-DS) in 2020. On the other hand, driven by ideas from mathematics and computer science, many variants make the parallel SNP computing model more powerful, for example, a complete arithmetic calculator constructed from SNP systems [32], SNP systems with random application of rules obtained by introducing probability [33], homogeneous SNP systems with structural plasticity obtained by adding and deleting connections between neurons [34], SNP systems with colored spikes inspired by colored Petri nets [35], SNP systems with communication on request, which use the parallel-cooperating grammar system to communicate on request [36], SNP systems with learning functions used to identify letters and symbols [37], and numerical SNP systems inspired by the numerical nature of numerical P (NP) systems [38].

For the SNP systems and variants mentioned above, the number of spikes in a neuron is an integer. In contrast, nonlinear spiking neural P (NSNP) systems were investigated by Peng et al. [39]. In these systems, the nonnegative integer state of neurons is replaced by real numbers, and the numbers of spikes consumed and generated are given by nonlinear functions. However, NSNP systems need 117 and 164 neurons to construct Turing universal systems as a function computing device and as a number generator, respectively. For a new heuristic computing model such as this one, computing power is an important measure, and as a computational model that trades space for time, the computing power of NSNP systems needs to be further improved.

In this work, in order to increase the computing power of NSNP systems and inspired by [26, 31], a novel P system, the nonlinear spiking neural P system with delays and weights on synapses (abbreviated NSNP-DW), is proposed. In NSNP-DW systems, owing to several factors such as dendrite length, the spikes emitted from the same firing neuron reach different postsynaptic neurons at different times; delays on synapses are therefore introduced into NSNP systems. Moreover, the number of dendrites connecting a firing neuron to its postsynaptic neurons differs. Although the spikes emitted from one neuron are the same, postsynaptic neurons with different numbers of dendrites to the firing neuron receive different amounts of spikes. Therefore, weights on synapses are added to NSNP systems, and weights are treated as multiplicative constants by which the number of spikes is scaled when transiting across synapses. Both of these elements play a role in enhancing the computing power of the system.

Different from NSNP systems, the main novelties of this work are as follows:
(i) We propose a novel P system, the universal nonlinear spiking neural P system with delays and weights on synapses, which is closer to biological neurons.
(ii) In the NSNP systems of [39], integer spike counts are replaced by nonlinear functions. In the proposed NSNP-DW system, we adopt more widely applicable nonlinear functions, also common in neural networks and machine learning, to abstract the complex responses generated by spikes.
(iii) We prove the computational power of the new P system; in addition, 47 and 43 neurons suffice to simulate function computation and number generation, respectively.
(iv) Computational efficiency is also an important evaluation criterion for a P system, usually measured by whether NP-complete problems can be solved in feasible time. We demonstrate the computational efficiency of the new P system by solving a typical NP-complete problem, the Subset Sum problem, in polynomial time. The ability of P systems to solve computationally hard problems so quickly is one of their attractions, and this makes the NSNP-DW system another powerful tool for tackling NP-complete problems.

The remainder of the paper is organized as follows: Section 2 briefly recalls the relevant notions of register machines. Section 3 proposes NSNP-DW systems and gives two illustrative examples. The computing power of NSNP-DW systems, equivalent to that of Turing machines, is proven in Section 4. Section 5 establishes the universality of NSNP-DW systems using fewer neurons. A uniform solution of the Subset Sum problem using NSNP-DW systems is given in Section 6. Conclusions and follow-up research on NSNP-DW systems are presented in the last section.

2. Prerequisites

The elements of the formal language theory behind SNP systems can be consulted in [11, 26]. Here, we only introduce the notions used in this work, namely the construction of register machines, the execution of their instructions, and the universal register machine.

We show that the proposed NSNP-DW system, as a number generating and accepting device, is equivalent to a Turing machine. The family of sets of Turing computable natural numbers, denoted NRE, is the family of length sets of recursively enumerable languages. NRE can be characterized by register machines. A register machine is a construct M = (m, H, l_0, l_h, I), where m is the number of registers, H denotes the instruction label set, l_0 is the starting label, l_h is the label of the halting instruction HALT, and I denotes the instruction set. Each instruction in I has one of the following three forms:
(1) l_i : (ADD(r), l_j, l_k) (add 1 to register r and then nondeterministically execute the instruction with label l_j or l_k);
(2) l_i : (SUB(r), l_j, l_k) (if register r is nonzero, then subtract 1 from it and go to the instruction with label l_j; otherwise, go to the instruction with label l_k);
(3) l_h : HALT (the halting instruction).
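To make these instruction semantics concrete, the following minimal Python sketch executes a register machine in the generating mode; the dictionary encoding of instructions and the convention of reading the result from the first register are illustrative assumptions, not part of the formal definition.

```python
import random

# Hypothetical encoding of a register machine M = (m, H, l_0, l_h, I):
# `instructions` maps a label to ("ADD", r, lj, lk), ("SUB", r, lj, lk), or ("HALT",).
def run_generating(instructions, m, l0="l0"):
    regs = [0] * (m + 1)           # registers 1..m, all empty initially; index 0 unused
    label = l0
    while True:                    # runs until HALT (may not terminate otherwise)
        op = instructions[label]
        if op[0] == "HALT":
            return regs[1]         # the generated number is read from the first register
        _, r, lj, lk = op
        if op[0] == "ADD":
            regs[r] += 1
            label = random.choice([lj, lk])   # nondeterministic continuation
        else:                      # SUB
            if regs[r] > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk
```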

By simulating the register machine M, the universality of the NSNP-DW system is verified. In the generating mode, all registers are initially empty, the computation starts from instruction l_0, and the instructions in I are executed continuously until the halting instruction l_h is reached; the number stored in the first register at that moment is generated by M, and the set of numbers generated by M is denoted N(M). N_acc(M) denotes the set of numbers accepted by M; in the accepting mode the ADD instruction is deterministic and can be expressed as l_i : (ADD(r), l_j).

Computable functions over the natural numbers can be calculated by register machines. A register machine M_u is universal if there is a recursive function g such that, for all nonnegative integers x and y, the equation φ_x(y) = M_u(g(x), y) is satisfied, where φ_x denotes the x-th unary partial recursive function. The universal register machine simulated by the NSNP-DW system is shown in Figure 1; it consists of 9 registers, 14 SUB instructions, 10 ADD instructions, and one HALT instruction.

3. NSNP-DW Systems

In the following, we give the definition of the NSNP-DW system together with the related semantic explanations. Two examples then show the operation of the system more clearly.

3.1. Definition

The structure of the proposed NSNP-DW system of degree m (m ≥ 1) is Π = (O, σ_1, σ_2, ..., σ_m, syn, d, in, out), where:
(1) O = {a} is the singleton alphabet (a denotes a spike);
(2) σ_1, ..., σ_m are neurons of the form σ_i = (u_i, R_i), 1 ≤ i ≤ m, where u_i is the initial value of spikes contained in neuron σ_i, indicating the initial state of neuron σ_i, and R_i is the finite set of spiking rules of the following forms:
(a) spiking/firing rules E/f_1(x) → f_2(x), where E is the firing condition, f_1(x) is a linear or nonlinear consumption function, and f_2(x) is a nonlinear production function;
(b) forgetting rules f_1(x) → λ, for a linear or nonlinear function f_1(x);
(3) syn ⊆ {1, 2, ..., m} × {1, 2, ..., m} × R is the set of synapses with weights; an element (i, j, w) ∈ syn denotes a synapse from neuron σ_i to neuron σ_j with weight w, and for any (i, j, w) ∈ syn we have i ≠ j;
(4) d(i, j) represents the delay on synapse (i, j), expressed as a number of time steps;
(5) in and out indicate the input and output neurons of the system, respectively.
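As an illustration of the components just listed, the following data-structure sketch mirrors the definition; the names Rule, Neuron, Synapse, and NSNPDW, and the restriction to threshold-style firing conditions, are our own illustrative choices rather than notation from the formal model.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Rule:
    threshold: float                                    # firing condition (threshold form)
    consume: Callable[[float], float]                   # consumption function f_1
    produce: Optional[Callable[[float], float]] = None  # production function f_2; None = forgetting rule

@dataclass
class Neuron:
    state: float                                        # current (initially u_i) spike value
    rules: List[Rule] = field(default_factory=list)     # the rule set R_i

@dataclass
class Synapse:
    src: int                                            # index i of the presynaptic neuron
    dst: int                                            # index j of the postsynaptic neuron
    weight: float                                       # multiplicative weight w
    delay: int = 0                                      # delay d(i, j), in time steps

@dataclass
class NSNPDW:
    neurons: List[Neuron]
    synapses: List[Synapse]
    inp: int                                            # index of the input neuron
    out: int                                            # index of the output neuron
```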

The NSNP-DW system can be visualized as a directed graph with nodes and arcs, where nodes are neurons and arcs represent synaptic connections. In the original SNP systems, the number of spikes is described by an integer; in NSNP-DW systems, besides integers, it can also be described by real values handled through nonlinear functions. For the rules in an NSNP-DW system, the firing condition E has two forms. (1) It is a regular expression over {a} that exactly "covers" the contents of the neuron; if E describes exactly the spikes consumed by the rule, it can be omitted. (2) It is a threshold T for enabling the rule, given as a positive real number. To distinguish the two cases, we write the rules as E/f_1(x) → f_2(x) and T/f_1(x) → f_2(x), respectively. Both types of rules are used in this paper. We denote by s_i(t) the spike value of neuron σ_i at step t. A firing rule T/f_1(x) → f_2(x) of neuron σ_i is applicable only when s_i(t) ≥ T. Intuitively speaking, when the firing condition is met, the neuron fires, consuming f_1(x) spikes (if the remaining spikes can no longer enable the rule, they disappear from the neuron) and sending out f_2(x) spikes. Forgetting rules f_1(x) → λ mean that spikes with value f_1(x) are consumed but no spikes are generated in the neuron to which the rule belongs. A neuron may contain multiple rules (two or more); assume two rules with thresholds T_1 and T_2, respectively. (i) When T_1 ≠ T_2 and both rules can be executed, the maximum threshold strategy is applied: for instance, if T_1 > T_2, the rule with threshold T_1 takes precedence, and the rule with threshold T_2 is not used. Forgetting rules are no exception to this strategy. (ii) When T_1 = T_2, the nondeterministic rule selection strategy is enabled; that is, the cases of applying the rule with T_1 and the rule with T_2 need to be discussed separately.
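The rule-selection semantics described above can be sketched as follows, continuing the illustrative structures from the previous listing and assuming threshold-form firing conditions.

```python
import random

def select_rule(neuron):
    """Pick the rule a neuron applies at the current step, or None if no rule is enabled."""
    enabled = [r for r in neuron.rules if neuron.state >= r.threshold]
    if not enabled:
        return None
    top = max(r.threshold for r in enabled)
    # Maximum threshold strategy: only rules with the largest threshold survive;
    # if several rules share that threshold, one of them is chosen nondeterministically.
    return random.choice([r for r in enabled if r.threshold == top])
```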

In addition, in the NSNP-DW system, weights and delays are attached to synapses. For any (i, j, w) ∈ syn, w is the weight on the synapse: if spikes with value v are emitted by neuron σ_i, then neuron σ_j receives spikes with value w · v. The delay on a synapse is represented by d(i, j), given as a number of time steps. If there is a delay d between neurons σ_i and σ_j and a spiking rule of neuron σ_i is enabled at step t, then the spikes with value f_1(x) are consumed from σ_i at step t, and neuron σ_j receives the spikes with value w · f_2(x) from neuron σ_i at step t + d. Therefore, the spikes are owned by σ_j only after d steps.
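Continuing the same illustrative sketch, one way to realize a computation step with weighted and delayed spike delivery is shown below; the scheduling of pending deliveries is an assumption about how to implement the semantics, not the authors' implementation.

```python
from collections import defaultdict

def simulate(system, steps):
    """Run an NSNPDW sketch for a number of steps with weighted, delayed deliveries."""
    pending = defaultdict(lambda: defaultdict(float))  # arrival step -> {neuron index: value}
    for t in range(steps):
        for i, neuron in enumerate(system.neurons):
            rule = select_rule(neuron)
            if rule is None:
                continue
            x = neuron.state
            neuron.state = x - rule.consume(x)         # spikes consumed at step t
            if rule.produce is None:                   # forgetting rule: nothing is emitted
                continue
            emitted = rule.produce(x)
            for syn in system.synapses:
                if syn.src == i:
                    # the weight scales the emitted value; the delay postpones its arrival
                    pending[t + syn.delay][syn.dst] += syn.weight * emitted
        for j, value in pending.pop(t, {}).items():    # deliveries whose delay expires now
            system.neurons[j].state += value
```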

Besides, the consumption function f_1 and the production function f_2 can be selected from the following common activation functions used in neural networks:
(1) Tanh function: f(x) = (e^x - e^(-x)) / (e^x + e^(-x)).
(2) Sigmoid function: f(x) = 1 / (1 + e^(-x)).
(3) ReLU function: f(x) = max(0, x).
(4) Derivative of the ReLU function: f(x) = 1 if x > 0, and f(x) = 0 otherwise.
(5) ELU function: f(x) = x if x > 0, and f(x) = α(e^x - 1) otherwise.
(6)
(7)
(8)
(9) LReLU function: f(x) = x if x > 0, and f(x) = 0.01x otherwise.
(10) PReLU function: f(x) = x if x > 0, and f(x) = αx otherwise, with α a learnable parameter.
(11) Softplus function: f(x) = ln(1 + e^x).
(12) Swish function: f(x) = x · sigmoid(βx).
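For reference, the named functions above can be written down directly; these are the standard definitions, with α and β kept as parameters (default values here are only illustrative).

```python
import math

def tanh(x):          return math.tanh(x)
def sigmoid(x):       return 1.0 / (1.0 + math.exp(-x))
def relu(x):          return max(0.0, x)
def relu_deriv(x):    return 1.0 if x > 0 else 0.0
def elu(x, a=1.0):    return x if x > 0 else a * (math.exp(x) - 1.0)
def lrelu(x):         return x if x > 0 else 0.01 * x
def prelu(x, a=0.25): return x if x > 0 else a * x
def softplus(x):      return math.log(1.0 + math.exp(x))
def swish(x, b=1.0):  return x * sigmoid(b * x)
```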

In the NSNP-DW system, neurons work in parallel, while the rules within each neuron are used sequentially; that is, in each computation step at most one rule is nondeterministically chosen and applied in a neuron.

Assuming m neurons and writing s_i(t) for the number of spikes in the i-th neuron at step t, the configuration (state) of the system at step t can be expressed as C_t = (s_1(t), s_2(t), ..., s_m(t)), and the initial configuration is C_0 = (u_1, u_2, ..., u_m). By executing spiking rules, the system passes from configuration C_t to configuration C_{t+1}; this passage, denoted C_t ⇒ C_{t+1}, is defined as a transition of system Π, and the sequence of transitions obtained in this way is defined as a computation. Each computation is associated with a spike train similar to a binary sequence, composed of 0s and 1s: a 1 corresponds to a step at which the output neuron emits a spike to the environment, and a 0 corresponds to a step at which it does not. In this study, the time interval between the spikes emitted by the output neuron to the environment is used as the computation result.
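As a small illustration of this output convention, the number computed by a halting computation is the distance between the first two 1s of the spike train; the helper below is hypothetical.

```python
def result_from_spike_train(spike_train):
    """Interval between the first two spikes the output neuron sends to the environment."""
    ones = [t for t, bit in enumerate(spike_train) if bit == 1]
    if len(ones) < 2:
        return None            # fewer than two output spikes: no number is produced
    return ones[1] - ones[0]

# Example: result_from_spike_train([0, 1, 0, 0, 0, 1]) == 4
```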

For NSNP-DW systems with at most m neurons and at most two rules in each neuron, we use N_gen(NSNP-DW_m) to represent the family of all sets of natural numbers generated by such systems and N_acc(NSNP-DW_m) to represent the family of all sets of natural numbers accepted by them. When the number of neurons is not bounded, the subscript m is replaced by *.

3.2. Two Illustrative Examples

Example 1. A simple example of the NSNP-DW system is given in Figure 2, containing three neurons labeled by 1, 2, and 3. The weight between neuron and neuron is 2. The delay between neuron and is , denoted by . It is assumed here that and take (5) and (4) of the above function.
At step , neuron receives two spikes from the environment, its state is , and rule is applied. Neuron consumes two spikes and sends one spike to neurons and each (because ). At step , neuron receives two spikes due to weight 2. At this moment, neuron is not fired because of . At step , neuron receives two spikes, one from neuron and one from neuron (because ). There are two rules in neuron that both meet the fired conditions. Subject to the maximum threshold strategy, rule is used and emits one spike to the environment (for ). This example is complete and the results of each step are presented in Table 1.

Example 2. Let and take (2) and (3) of the above functions as the consumption and generation functions. We define as the system for generating natural numbers; as shown in Figure 3, each neuron initially has a spike.
In the first step, neuron uses rule to emit a spike to the environment. At the same time, neurons and also fire by applying rule , sending a spike to each other (because of weight 2), and neuron and neuron send one spike (because of weight 2) and a spike to neuron , respectively. In the following steps, neurons and may repeat their initial actions any number of times, and the spikes in neuron are always removed. Once neuron executes rule at step , neuron stops emitting spikes, and neuron sends the last spike to neuron . At step , neuron has a spike and uses rule to send a spike to the environment again. In this way, the time interval between the two spikes transmitted to the environment is the natural number generated by the system.

4. Computational Power

In the nonlinear spiking rules of the NSNP-DW system, we choose two of the aforementioned functions (as consumption and generation functions) to verify the Turing universality of the system and its computing power. The following consumption and generation functions are considered.

Thus, the state of neuron σ_i can be recorded by a nonlinear equation s_i(t + 1) = s_i(t) - c(t) + r(t), where s_i(t) and s_i(t + 1) are the states of neuron σ_i at steps t and t + 1, respectively, c(t) represents the consumption value, and r(t) is the reception value.
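A minimal illustration of this update rule, assuming particular consumption and reception values are already known for the step.

```python
def next_state(s_t, consumed, received):
    """State of a neuron at step t+1: previous state minus consumption plus reception."""
    return s_t - consumed + received

# For instance, with a sigmoid consumption of the current state and a weighted input of 2.0:
# next_state(1.5, sigmoid(1.5), 2.0)
```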

In this part, we are committed to showing that the NSNP-DW system is Turing universal both as a number generating device and as a number accepting device. Given that register machines can generate and accept any set in NRE, the NSNP-DW system is proved to be universal by simulating the numbers generated by a register machine M. To facilitate understanding, we assume that the number in a register corresponds to the spikes in the associated neuron. When neurons , , and receive two spikes, the corresponding instructions are activated.

4.1. NSNP-DW Systems as Number Generating Device

Theorem 1. N_gen(NSNP-DW_*) = NRE.

Proof. The inclusion N_gen(NSNP-DW_*) ⊆ NRE holds by the Turing-Church thesis; only NRE ⊆ N_gen(NSNP-DW_*) needs to be proved. In the number generating mode, M is the register machine to be simulated, and the number generating device includes ADD, SUB, and FIN modules. M generates a number in the following way: initially, all registers are empty, the simulation starts from instruction l_0, continues with the required labeled instructions, and stops at instruction l_h. According to the instructions, the computed number is the one stored in the first register. In the ADD and SUB modules, neuron receives two spikes and runs according to instruction (OP is the ADD or SUB operation). Neuron or nondeterministically receives two spikes, and the corresponding instruction or is activated. In the FIN module, neuron sends spikes to the environment twice, and the interval between them encodes the result.
ADD module (shown in Figure 4)—simulating an ADD instruction l_i : (ADD(r), l_j, l_k).
Neuron will receive two spikes from the environment. After the ADD module runs, spikes will be sent nondeterministically to neuron or to simulate instruction or . When two spikes are in neuron , the corresponding register is increased by 1.
In detail, neuron fires at step , and rule is executed, sending one spike to neurons , , , and , respectively. At the next moment, neuron receives one spike. Since rule and rule have the same threshold, one of the two rules is chosen nondeterministically:
(i) At step , if rule is used, one spike will be sent to neurons and , respectively. In the next step, both neurons and contain two spikes, one from neuron and one from neuron . At step , neuron fires by using rule , so that two spikes are removed. Rule in neuron is applied simultaneously, consuming two spikes and emitting one to neurons and . At the next step, neuron becomes active by executing rule , emitting one spike to neurons and each. In this way, the second spike is received by , which increases the value of register by 1. Neuron receives a total of two spikes successively, and then instruction starts to be simulated.
(ii) At step , if neuron fires by using rule , a spike is removed. At step , neurons and each receive one spike, which soon no longer exists in neuron because of . The spike received in neuron is sent to neurons and through rule . Then, at the next moment, neuron transmits one spike, which is received by and . So neurons and have each received two spikes at step , indicating that the register is increased by 1 and is activated.
Therefore, instruction is simulated correctly. Considering the two different rules, Table 2 shows the number of spikes in all neurons at each moment.
SUB module (shown in Figure 5)—simulating a SUB instruction l_i : (SUB(r), l_j, l_k).
Two spikes are received by neuron . If register is not empty, then two spikes are sent to neuron , and the corresponding instruction is executed. If the value in the register is zero, then two spikes are sent to neuron , and the instruction with label is executed. The detailed simulation process is as follows.
Neuron fires at step , and rule is applied to emit one spike. At step , neuron , , and each receive one spike from neuron . Next, there will be two cases according to the value of spike in neuron :(i)At step , if spikes are contained by (the value of the corresponding register is ), then rule is applicable. Next step, the neuron receives three spikes and fires, one from neuron and two from neuron . Based on the maximum threshold strategy, rule is used to consume these three spikes. At the same step, neuron receives the second spike, and then instruction is simulated by system .(ii)At step , if neuron only contains one spike (the value of the corresponding register is ), then the spike is removed by rule . At step , a spike from neuron is in neuron . The firing of neuron by rule causes neuron to add a spike. At the next step, neuron receives a total of two spikes, and then instruction is simulated by system , but not .So SUB module simulates instruction correctly. The simulated numerical changes are presented in Table 3.
FIN module (shown in Figure 6)—outputting the result of the computation.
At step , neuron fires by running rule , transmitting a spike to . Originally, neuron contains spikes, and after receiving one spike, the rule can be used first because of the maximum threshold strategy. Then neuron and neuron each have a spike from neuron . The first spike is sent by output neuron to the environment through at step . Besides, neuron receives one spike from neuron and neuron , respectively, causing them to be forgotten by . Since two spikes are consumed in neuron , a spike is continuously given and . As a result, both spikes in neuron from to are forgotten by . Until step , only one spike is kept in , and then the generated one is emitted by . At step , the neuron still accepts two spikes but is forgotten instantly. At step , the last spike is received by neuron from and sent to the environment through rule . The time interval between spikes emitted to the environment is the number calculated by the system . In short, the numerical result computed through the system is . Take the generated number as an example, and the simulated numerical changes of output module are reflected in Table 4.
Through the above discussion, we can see that the system accurately simulates the register machine M; hence, the theorem holds.

4.2. NSNP-DW Systems as Number Accepting Device

Theorem 2. N_acc(NSNP-DW_*) = NRE.

Proof. In the number accepting mode, the number to be accepted is placed in the first register (the others are empty), and the calculation starts from l_0; when the calculation halts, the number is accepted. Similar to Theorem 1, we only need to verify NRE ⊆ N_acc(NSNP-DW_*). The constructed NSNP-DW system as a number accepting device includes an INPUT module, a deterministic ADD module, and a SUB module. Figure 7 shows the INPUT module.
Suppose that the first spike is received by neuron at step ; the firing of gives a spike to neurons , , and through rule . Then, neuron fires and outputs one spike to neuron , while neurons and fire by using rule . At step , neurons and send one spike to each other, and especially neuron receives two spikes from neuron , until neuron receives the second spike. Thus, neuron receives spikes from step to .
At step , neuron obtains a spike again and reacts using rule , sending one spike to neurons , , and again. At step , and each possess two spikes, and rule is applicable so that the spikes are eliminated. Simultaneously, neuron fires for the second time, executing rule to give neuron a spike, whereupon instruction is activated.
Figure 8 displays the simulation of a deterministic ADD instruction. Neuron accepts two spikes, consuming them and sending two spikes to neuron through rule ; the instruction is thus simulated. Neuron contains two more spikes, indicating that the register is increased by 1. The SUB module in the accepting mode is the same as in Figure 5.
In short, N_acc(NSNP-DW_*) = NRE holds.

5. Small Universal Computing Devices

5.1. Small Universal NSNP-DW Systems as Function Computing Device

Theorem 3. There is a small universal NSNP-DW system possessing 47 neurons for computing functions.

Proof. For simulation of the register machine , the designed NSNP-DW system includes INPUT, ADD, SUB, and OUTPUT modules. The general design is visualized in Figure 9. Still the same as originally assumed, the value in register corresponds to spikes in neuron . Assume that all neurons are empty in the initial configuration. Figure 10 is the designed INPUT module. is the input neuron, reading a spike train . Finally, and spikes are contained by neurons and , respectively.
As before, a time step or step represents the execution time of one rule. We still use this notion to mark the moment the rule is executed. At step , if the first spike from the environment is received by neuron , rule is applicable, sending one spike to neurons , , , , , and , respectively. At step , neurons , , , and will not receive spikes due to one moment of delay; neuron and neuron each receive one spike but do not fire. At the next step, the neurons , , , and receive one spike from neuron , but neurons and do not fire. Neurons and fire by employing rule , sending one spike to each other and both sending one spike to , so neuron owns two spikes at . In this way, neuron continues to receive two spikes, until step , has a total of spikes.
At step , here actually , and neuron fires a second time and applies rule to send one spike to each of neurons , , , , , and . Neurons and each have two spikes at step . For one delay, neurons , , , and receive one spike from neuron at step . At this step, neurons and each have two spikes, and executing rule causes two spikes to be removed. On the contrary, neurons and each have two spikes and stay active. Two spikes are sent to each other by neurons and , and two are given to by neuron , so that neuron contains two spikes at step . Neuron accepts two spikes from neuron continuously until step . At step , neuron retains spikes in total.
At step , here actually , neuron fires a third time, one spike is consumed, and one is sent to neurons , , , , , and , respectively, by rule . At step , neuron with three spikes fires, and the rule is used to consume three spikes and send two spikes to neuron . When the neuron receives two spikes, the instruction is simulated. Neuron also fires at step and sends one spike to neurons and through rule . At step , neuron receives spikes from neuron and neuron ; forgetting rule is applied to remove two spikes. The same is true for neuron . At the same step, neurons and each receive the third spike from neuron , so they have three spikes and remain inactive. The forgetting rule is used to remove the three spikes.
In order to reflect the rationality of INPUT module, assuming and , the change in the number of spikes at each step can be clearly seen in Table 5.
In addition, the ADD and SUB modules are the same as in Figures 8 and 5. The design and simulation process of the OUTPUT module are the same as in Figure 6, except that register 1 is replaced by register 8 (shown in Figure 11). This is because, when a small universal NSNP-DW system is used as a function computing device, the result after all instruction simulations is stored in register 8 (i.e., in the corresponding neuron). The result is output through the OUTPUT module.
From the discussion above, the NSNP-DW system as a function computing device can accurately simulate the register machine by using 57 neurons: (i) 7 neurons in the INPUT module, (ii) 2 neurons in the OUTPUT module, (iii) 1 neuron in each SUB instruction and 14 in total, (iv) 9 neurons in 9 registers, and (v) 25 neurons for 25 instructions.
The register machine is simulated by the NSNP-DW system, and each of its instructions is associated with a neuron. However, some instructions are consecutive. By exploring the relationships between the instructions, the corresponding modules can be combined, some instructions can be omitted, and the number of neurons used is thereby reduced. A detailed introduction to the initial register machine and its instructions can be found in [40]. For the NSNP-DW system, module combination is mainly divided into three categories: module ADD-ADD, module ADD-SUB, and module SUB-ADD (including modules SUB-ADD-1 and SUB-ADD-2). The working process of modules ADD-SUB and SUB-ADD is closely related to that of module SUB. The working principle is expressed by the structure diagrams. Readers interested in a description of the principle can refer to [11, 26, 41].

5.1.1. Module ADD-ADD

These are two deterministic ADD instructions. The simulation of each instruction is the same as in Figure 8. Figure 12 shows the module design before the instruction is omitted.

Obviously, this is a sequence of two consecutive ADD instructions connected by ; instruction can be omitted through the following module ADD-ADD (shown in Figure 13).

5.1.2. Module ADD-SUB

These are two consecutive pairs of ADD-SUB instructions connected by and , respectively; we can combine instructions and into one instruction , which saves instruction . Instructions and are combined into one instruction , saving instruction .

Taking the omission of as an example, neuron sends spikes to and , and then neuron performs the simulation of instruction . Here, neuron can be omitted, and neuron replaces to directly simulate the SUB instruction. It can be seen from Figure 14 that this is possible.

5.1.3. Module SUB-ADD

For instruction , when the value of the register is greater than zero, 1 is subtracted from it and the instruction is executed; otherwise, the instruction labeled is activated. Therefore, considering that there are two forms of consecutive SUB-ADD instructions, module SUB-ADD is divided into modules SUB-ADD-1 and SUB-ADD-2.

Module SUB-ADD-1:

This is the case when is activated and is reserved. We can combine instructions and into one instruction , causing instruction to be omitted (see Figure 15).

Module SUB-ADD-2:

This is the case when is activated and is reserved. There are six pairs of instructions that can be combined in pairs. It is observed that the following ADD instruction happens to be the first execution position of the previous SUB instruction. Then each pair of combined SUB-ADD instructions can be updated accordingly. When the register , they can share one neuron . In this way, six neurons in total, , , , , , and , can be saved. This is illustrated in Figure 16.

Through the above instruction combination (called "code optimization" in [41]), 10 neurons , , , , , , , , , and can be omitted. In the end, 47 neurons are enough to complete a small universal NSNP-DW system for computing functions:
(i) Nine neurons for the 9 registers.
(ii) 15 neurons for the remaining 15 instruction labels (ten labels are saved).
(iii) Seven neurons in the INPUT module.
(iv) 14 neurons for the 14 SUB instructions.
(v) Two neurons in the OUTPUT module.

5.2. Small Universal NSNP-DW Systems as Number Generator

In the simulation of the number generator, the INPUT module can be combined with the OUTPUT module, as shown in Figure 17. In the constructed INPUT-OUTPUT module, instruction is removed, and register 8 is no longer needed. The spike train that the input neuron reads from the environment is , neuron is loaded with spikes, and neuron nondeterministically receives spikes. Neuron fires twice in succession, and the time interval between the two firings is the numerical result generated.

The simulation of NSNP-DW systems as a number generator uses 43 neurons in total; the specific details are not repeated:
(i) Eight neurons for the 8 registers (register 8 is no longer used).
(ii) 14 neurons for the remaining 14 instruction labels ( and ten further labels are saved).
(iii) Seven neurons in the INPUT-OUTPUT module.
(iv) 14 neurons for the 14 SUB instructions.

Theorem 4. There is a small universal NSNP-DW system possessing 43 neurons working as a number generator.

The specific simulation is not described in detail. We use an example to analyze the feasibility of the number generator simulation; assuming and , the results of each step are reflected in Table 6.

5.3. Discussion

Theorem 3 gives a Turing universal NSNP-DW system with fewer neurons as a function computing device. To verify the computing power of the NSNP-DW system more intuitively, Table 7 compares the number of computing units for this variant and related systems. According to Table 7, NSNP systems, SNP systems, SNP-DS systems, and recurrent neural networks use 117, 67, 56, and 886 neurons, respectively, to achieve Turing universality for computing functions, whereas NSNP-DW systems require only 47 neurons. Besides, according to Table 8, 121 fewer neurons are needed for simulating a number generator. In short, NSNP-DW systems use fewer neurons than these systems, and the computational power of NSNP systems has been effectively improved.

6. A Uniform Solution to Subset Sum Problem

The Subset Sum problem is one of the classical NP-complete problems [43]. We use the NSNP-DW system to obtain a uniform solution of the Subset Sum problem in a nondeterministic operation mode.

The Swish function and the LReLU function are taken as the spike consumption function and the generation function, respectively, in this problem.
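Using the illustrative structures from Section 3.1, such a rule could be instantiated as follows; the threshold value is hypothetical and only serves to show how the two functions plug into a rule.

```python
# Consumption = Swish, production = LReLU; the threshold 4.0 is illustrative only.
subset_sum_rule = Rule(threshold=4.0, consume=swish, produce=lrelu)
```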

Problem. NAME: Subset Sum.

INSTANCE: a set V = {v_1, v_2, ..., v_n} of positive integers and a positive integer S.

QUESTION: is there a subset B ⊆ V such that the sum of the elements of B equals S?
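To fix the meaning of the question, a direct exponential-time check can be written in a few lines; this brute-force sketch is for comparison only and is not the NSNP-DW construction described below.

```python
from itertools import combinations

def subset_sum(values, target):
    """Return a subset of `values` whose elements sum to `target`, or None if none exists."""
    for k in range(len(values) + 1):
        for subset in combinations(values, k):
            if sum(subset) == target:
                return subset
    return None

# Example: subset_sum([3, 5, 7, 11], 12) -> (5, 7)
```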

Theorem 5. The Subset Sum problem can be solved by NSNP-DW systems in a uniform way.

Figure 18 depicts the architecture of the NSNP-DW system solving the Subset Sum problem in a uniform way. The characteristic of a uniform solution is that the system is constructed from the size of the problem only and merely "recognizes" the numbers when solving it; the instance of the problem needs to be introduced into the system through the input neuron. The positive integers of the problem are encoded as spikes: at the beginning of the calculation, spikes enter neuron , spikes enter neuron , ..., and spikes enter neuron . In the initial configuration (state) of the system, except for neuron , which contains four spikes, all other neurons are empty.

In the first calculation step, either of the two rules in neuron may be employed (because they have the same threshold). The nondeterministic use of these two rules means that the system solves the Subset Sum problem in a nondeterministic mode of operation, and it also corresponds to whether a given integer is included in the subset. In the following, we carry out a complete derivation.

Proof. Neuron initially has four spikes. At step one, if rule is selected for use, then neuron will consume spikes and send three spikes to neurons and , respectively (because ). At step 2, neuron forgets three spikes by the rule . At the same time, neuron uses rule to become active and sends two spikes to neurons (because ). At the end of this step, neurons and maintain their original state. At step 3, neuron has a total of spikes and fires. The rule is used first, sending three spikes to neurons and , respectively. This process will continue for steps until the rule cannot be activated. Neurons and still cannot be active after receiving spikes. When there are only two spikes left in neuron , rule is used, and finally, two spikes are sent to neuron and , respectively. After possessing spikes, neuron fires and emits a spike to neurons and , respectively. In the next step, the neuron still cannot fire because it takes spikes to be activated. Conversely, neuron that has received spikes is activated by rule and sends one spike to neuron . In this way, the output neuron can fire and emit spikes to the environment.
If initially neuron uses the rule , spikes are consumed and two spikes are sent to neurons and , respectively (because ). In the second step, the two spikes received by neuron are removed by rule . Neuron uses the rule and sends a spike to neuron (because ). Before the neuron fires, neuron remains inactive. After neuron receives a total of spikes from , it fires and sends a spike to neuron . In the next step, the neuron contains only one spike, so it does not fire, nor does it emit spikes into the environment.
At this point, we have ended the process of solving the uniform solution of the Subset Sum. Obviously, the system requires a total of neurons. After stopping operation, if there are exactly spikes in the environment, the answer to the problem is “yes,” which means that there is a subset that makes hold. Otherwise, it is “no.” This is enough to show that the NSNP-DW system for solving Subset Sum problem is complete.
In the calculation process, the calculation between neurons is parallel, and the rules in each neuron are calculated sequentially. Through computing and reasoning, it can be known that neurons , , , , and fire once, respectively, and neuron fires times. After all other neurons stop computing, the neuron can fire at most times. Therefore, the system needs steps to complete the calculation. In addition, we choose nonlinear functions as the spike consumption function and generation function, which is closer to reality and reflects the significance of nonlinear functions in the NSNP-DW system.

7. Conclusions and Further Work

Nonlinear spiking neural P (NSNP) systems are a variant of spiking neural P (SNP) systems in which nonlinear functions are used flexibly. We focus on the computing power of NSNP systems in this work. Two mechanisms, delays and weights, are introduced, and nonlinear spiking neural P systems with delays and weights on synapses (NSNP-DW) are proposed. Explicit examples are given to visually demonstrate the operation of the NSNP-DW system. Through a series of simulations, 47 and 43 neurons are shown to be sufficient for constructing small universal NSNP-DW systems as a function computing device and as a number generator, respectively. Compared with NSNP systems [39], the NSNP-DW system uses 70 and 121 fewer neurons, respectively, as a function computing device and as a number generator. Finally, the Subset Sum problem is solved efficiently in a uniform way using the NSNP-DW system.

As further work, the NSNP-DW system, as a distributed parallel computing model, can be combined with clustering algorithms to improve algorithm efficiency. As far as the computational power of the NSNP-DW system is concerned, we intend to prove that the 47 and 43 neurons used for simulating function computation and the number generator, respectively, are minimal. In particular, in NSNP systems the number of spikes is no longer restricted to integers and is handled by nonlinear functions. In view of this breakthrough, we can try to link NSNP-DW systems with neural networks to open up more interesting research.

Data Availability

No datasets were used in this article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research project was supported by the National Natural Science Foundation of China (61876101, 61802234, and 61806114), Social Science Fund Project of Shandong Province, China (16BGLJ06 and 11CGLJ22), Natural Science Fund Project of Shandong Province, China (ZR2019QF007), Postdoctoral Project, China (2017M612339 and 2018M642695), Humanities and Social Sciences Youth Fund of the Ministry of Education, China (19YJCZH244), and Postdoctoral Special Funding Project, China (2019T120607).