<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
	<title>SENTIPULSE AI</title>
	<language>en-us</language>
	<generator>PRN Asia</generator>
	<description><![CDATA[We tell your story to the world!]]></description>
		<item>
		<title>SentiPulse Launches SentiCat, Positioning Agents as the Foundation for Digital Humans</title>
		<author></author>
		<pubDate>Wed, 29 Apr 2026 21:04:00 +0800</pubDate>
		<description><![CDATA[Agents Serve as a Capability Layer, with SentiCat Marking an Early Step in 
SentiPulse's Digital Human Roadmap

JINAN, China, April 29, 2026 /PRNewswire/ -- As artificial intelligence (AI) 
evolves from conversational models to action-capable agents, a longer-term 
trajectory is beginning to take shape: AI is moving from on-demand tools toward 
systems designed for sustained interaction. In response to this shift, 
SentiPulse has introduced SentiCat, an agent-based system positioned as an 
early step toward its broader digital human system. When launched on desktop, 
users encounter more than a chat interface—they are greeted by SUSU, a 3D AI 
persona with a persistent identity and personality, designed to support 
ongoing, context-aware interaction over time.

"We do not view agents as the end state—they are a step along the path toward 
digital humans," said Grant Han, CEO of SentiPulse. "Agents provide the ability 
to execute tasks and take action, serving as the operational layer that enables 
digital humans to perform work. At the same time, we are developing 
capabilities for emotional awareness and expression in digital human systems."

 <https://mma.prnasia.com/media2/2969023/image.html>
SentiCat Product Interface

In practical use, users no longer interact with a passive tool that waits for 
instructions. Instead, they engage with an always-available system designed for 
ongoing interaction over time. During conversations, the system can invoke 
underlying agent capabilities to execute multi-step tasks, including retrieving 
information, processing files, and completing automated workflows within a 
single interaction.

This design reflects a shift in the user-system relationship from 
single-session interactions to persistent engagement. According to SentiPulse, 
this transition affects not only user experience but also system performance. 
"If users only engage the system when a task arises, it becomes difficult for 
an agent to build meaningful context," said Grant Han. "The duration of 
interaction directly shapes execution quality."

From a technical standpoint, SentiPulse positions the AI persona as the 
interaction and presence layer, responsible for user-facing engagement and 
continuity, while agents provide the underlying execution and behavioral 
capabilities. Together, they enable the system to complete tasks while 
incrementally building a more detailed understanding of the user over time. In 
SentiPulse's framework, AI personas are early forms of digital humans, unifying 
personality, memory, and action.
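
For illustration only, the division of labor described here can be sketched 
as a persona object that holds identity and accumulated context while 
delegating execution to an agent layer. This is a hypothetical sketch in 
Python, not SentiPulse's implementation; every name in it is invented.

# Hypothetical sketch of the persona/agent layering described above;
# not SentiPulse's code. The persona owns identity and memory, the
# agent layer owns task execution.
from dataclasses import dataclass, field


@dataclass
class AgentLayer:
    """Execution layer: runs multi-step tasks on request."""

    def execute(self, task: str) -> str:
        # Stand-in for real tool calls (retrieval, files, workflows).
        return f"completed: {task}"


@dataclass
class Persona:
    """Interaction and presence layer with persistent context."""

    name: str
    memory: list = field(default_factory=list)
    agent: AgentLayer = field(default_factory=AgentLayer)

    def converse(self, user_input: str) -> str:
        self.memory.append(user_input)  # context grows with engagement
        if user_input.startswith("do:"):
            return self.agent.execute(user_input[3:].strip())
        return f"{self.name} now holds {len(self.memory)} turns of context."


susu = Persona(name="SUSU")
print(susu.converse("hello"))
print(susu.converse("do: summarize report.txt"))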

In the agent space, as execution capabilities become more standardized, 
competition is shifting from "what the system can do" to "whether it is used 
continuously over time." SentiPulse frames this as a flywheel: longer 
engagement leads to richer context, improved system understanding, increased 
efficiency, stronger user reliance, and continued usage.

Grant Han noted that this approach is becoming viable due to the parallel 
maturation of several technologies, including large-scale models, long-term 
memory systems, and agent frameworks. SentiPulse has also built a foundation in 
the 3D digital human domain, with capabilities spanning character design, 
motion generation frameworks, interaction datasets, and dialogue decision 
models.

Within this context, the role of agents is also evolving. Rather than 
functioning as standalone applications, they are increasingly positioned as a 
foundational capability layer supporting more complex interactive systems. For 
SentiPulse, however, this layer is not the final destination. The ultimate goal 
is a digital human system capable of sustained interaction with users in a 
continuous operating context.

]]></description>
		<detail><![CDATA[<p class="prntac"><b><i>Agents Serve as a Capability Layer, with SentiCat Marking an Early Step in SentiPulse's Digital Human Roadmap</i></b></p> 
<p><span class="legendSpanClass">JINAN, China</span>, <span class="legendSpanClass">April 29, 2026</span> /PRNewswire/ -- As artificial intelligence (AI) evolves from conversational models to action-capable agents, a longer-term trajectory is beginning to take shape: AI is moving from on-demand tools toward systems designed for sustained interaction. In response to this shift, SentiPulse has introduced SentiCat, an agent-based system positioned as an early step toward its broader digital human system. When launched on desktop, users encounter more than a chat interface—they are greeted by SUSU, a 3D AI persona with a persistent identity and personality, designed to support ongoing, context-aware interaction over time.</p> 
<p>&quot;We do not view agents as the end state—they are a step along the path toward digital humans,&quot; said Grant Han, CEO of SentiPulse. &quot;Agents provide the ability to execute tasks and take action, serving as the operational layer that enables digital humans to perform work. At the same time, we are developing capabilities for emotional awareness and expression in digital human systems.&quot;</p> 
<div class="PRN_ImbeddedAssetReference" id="DivAssetPlaceHolder5310"> 
 <p style="TEXT-ALIGN: center; WIDTH: 100%"><a href="https://mma.prnasia.com/media2/2969023/image.html" target="_blank" rel="nofollow" style="color: #0000FF"><img src="https://mma.prnasia.com/media2/2969023/image.jpg?p=medium600" title="SentiCat Product Interface" alt="SentiCat Product Interface" /></a><br /><span>SentiCat Product Interface</span></p> 
</div> 
<p>In practical use, users no longer interact with a passive tool that waits for instructions. Instead, they engage with an always-available system designed for ongoing interaction over time. During conversations, the system can invoke underlying agent capabilities to execute multi-step tasks, including retrieving information, processing files, and completing automated workflows within a single interaction.</p> 
<p>This design reflects a shift in the user-system relationship from single-session interactions to persistent engagement. According to SentiPulse, this transition affects not only user experience but also system performance. &quot;If users only engage the system when a task arises, it becomes difficult for an agent to build meaningful context,&quot; said Grant Han. &quot;The duration of interaction directly shapes execution quality.&quot;</p> 
<p>From a technical standpoint, SentiPulse positions the AI persona as the interaction and presence layer, responsible for user-facing engagement and continuity, while agents provide the underlying execution and behavioral capabilities. Together, they enable the system to complete tasks while incrementally building a more detailed understanding of the user over time. In SentiPulse's framework, AI personas are early forms of digital humans, unifying personality, memory, and action.</p> 
<p>In the agent space, as execution capabilities become more standardized, competition is shifting from &quot;what the system can do&quot; to &quot;whether it is used continuously over time.&quot; SentiPulse frames this as a flywheel: longer engagement leads to richer context, improved system understanding, increased efficiency, stronger user reliance, and continued usage.</p> 
<p>Grant Han noted that this approach is becoming viable due to the parallel maturation of several technologies, including large-scale models, long-term memory systems, and agent frameworks. SentiPulse has also built a foundation in the 3D digital human domain, with capabilities spanning character design, motion generation frameworks, interaction datasets, and dialogue decision models.</p> 
<p>Within this context, the role of agents is also evolving. Rather than functioning as standalone applications, they are increasingly positioned as a foundational capability layer supporting more complex interactive systems. For SentiPulse, however, this layer is not the final destination. The ultimate goal is a digital human system capable of sustained interaction with users in a continuous operating context.</p> 
<div class="PRN_ImbeddedAssetReference" id="DivAssetPlaceHolder0"> 
</div>]]></detail>
		<source><![CDATA[SentiPulse]]></source>
	</item>
		<item>
		<title>SentiAvatar, the First Interactive 3D Digital Human Framework from SentiPulse and GSAI, Now Open Source</title>
		<author></author>
		<pubDate>Thu, 09 Apr 2026 16:33:00 +0800</pubDate>
		<description><![CDATA[Release includes motion dataset, foundation model, and streaming architecture 
designed to align speech, gesture, and expression in live conversation

JINAN, China, April 9, 2026 /PRNewswire/ -- SentiPulse, a leading AI company 
focused on emotional foundation models and user experience innovation, in 
collaboration with a PhD team from the Gaoling School of Artificial 
Intelligence (GSAI), Renmin University of China (RUC), announced the 
open-source release of SentiAvatar, a framework for building expressive 
interactive 3D digital humans.

SentiAvatar powers SUSU, a real-time 3D avatar capable of conversation, 
expressive motion, and emotional delivery. The release includes the full 
SentiAvatar framework, the SUSU character model, and the SuSuInterActs 
high-quality motion dataset, all now freely available on GitHub.

 <https://mma.prnasia.com/media2/2953007/image1.html>
High-quality 3D human motions and expressions generated by SentiAvatar are 
presented by the digital human SUSU.

In the rapidly advancing field of 3D digital humans, one long-overlooked yet 
critical issue is becoming increasingly clear: unnatural expression. The 
avatar's mouth moves and its hands gesture, but the actions don't match the 
meaning, and the face looks stiff. The combination quickly triggers the uncanny 
valley effect.

The root cause is simple: human communication has never relied on spoken language 
alone. A shrug conveys helplessness, a nod signals agreement, and a slight 
raise of the eyebrow hints at doubt. These nonverbal signals—gestures, posture, 
facial expressions—are the soul of real conversation.

Yet getting a 3D digital human to naturally "gesture and move as it speaks" 
in real conversation has proven far harder than expected. This is not purely 
an engineering challenge; it involves three persistent, unsolved problems: 
the scarcity of high-quality data, the difficulty of understanding composite 
semantic actions, and the challenge of syncing motion with the rhythm of 
speech.

SentiAvatar: A Framework for 3D Digital Human Motion Generation

The SuSuInterActs Dataset 

SentiPulse built SuSuInterActs around a single character: SUSU (age 22, warm 
and lively, emotionally rich). The dataset contains 21,000 clips and 37 hours 
of multimodal conversational data, including synchronized speech, annotated 
behavioral text, full-body motion, and facial expressions—helping address the 
lack of high-quality Chinese-language datasets in the field.
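
As an illustrative aside, one clip in a dataset combining these modalities 
could be represented by a record like the following; the field names are 
assumptions, not the actual SuSuInterActs schema. The source figures also 
imply an average clip length of about 6.3 seconds (37 hours over 21,000 
clips).

# Hypothetical record for one clip in a multimodal conversational motion
# dataset; field names are illustrative, not the SuSuInterActs schema.
from dataclasses import dataclass


@dataclass
class MotionClip:
    clip_id: str
    audio_path: str        # synchronized speech
    behavior_text: str     # annotated behavioral description
    body_motion_path: str  # full-body motion sequence
    face_params_path: str  # per-frame facial expression parameters
    duration_s: float


# 21,000 clips totaling 37 hours implies an average clip length of
# about 37 * 3600 / 21000 = 6.3 seconds.
print(f"average clip length: {37 * 3600 / 21000:.1f} s")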

Motion Foundation Model: Pre-trained on 200K+ Sequences

Because conversational motion data is inherently limited to dialogue scenarios, 
the team pre-trained a proprietary Motion Foundation Model on more than 200,000 
heterogeneous motion sequences (approximately 676 hours), learning general 
motion patterns that go far beyond dialogue-specific actions.

Core Architecture: Plan-Then-Infill 

SentiAvatar introduces a novel dual-channel parallel architecture, 
Plan-Then-Infill. It separates body motion from facial expression, first 
planning what action to perform and then filling in how to execute it frame 
by frame.
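
A minimal sketch of the plan-then-infill idea, assuming planning yields a 
coarse action label that a second stage expands into per-frame values; the 
functions below are invented stand-ins, not the SentiAvatar model.

# Minimal plan-then-infill illustration: stage 1 decides WHAT action to
# perform, stage 2 fills in HOW it unfolds frame by frame. Invented
# stand-ins, not the SentiAvatar model itself.
import math


def plan(utterance: str) -> str:
    """Stage 1: choose a coarse action label for the upcoming span."""
    return "nod" if "agree" in utterance else "beat_gesture"


def infill(action: str, n_frames: int) -> list:
    """Stage 2: expand the planned action into per-frame motion values."""
    amplitude = 0.4 if action == "nod" else 1.0
    return [amplitude * math.sin(2 * math.pi * i / n_frames)
            for i in range(n_frames)]


action = plan("I agree with that point")
frames = infill(action, 30)
print(action, len(frames), "frames")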

State-of-the-Art Real-Time Performance 

SentiAvatar achieves new SOTA results on both the SuSuInterActs and BEATv2 
datasets. Compared with mainstream models: MoMask lacks speech input, so its 
motion rhythm feels static and disconnected; EMAGE syncs with audio but 
ignores semantic intent; AT2M-GPT can misinterpret the meaning of actions; 
and HunYuan-Motion can produce unstable outputs, often with distorted or 
unnatural movements. SentiAvatar delivers semantically accurate motion that 
is tightly aligned with audio.

The framework generates six-second motion sequences within 0.3 seconds and 
supports infinite-turn streaming interaction. This means digital humans can 
continuously generate coherent gestures and expressions during live 
conversation—without waiting for a full sentence to finish before processing, 
directly addressing one of the core causes of unnatural expression.
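
The streaming behavior can be pictured as chunked generation: motion is 
emitted for short spans of incoming speech rather than for whole sentences. 
Generating six seconds of motion in 0.3 seconds implies a real-time factor 
of roughly 20x, which is what makes such a loop viable. The sketch below is 
an assumption-laden illustration; names and chunk sizes are invented.

# Illustrative streaming loop: emit a motion chunk every few incoming
# speech tokens instead of waiting for the sentence to end. All names
# and chunk sizes are invented, not measured SentiAvatar behavior.
from typing import Iterable, Iterator


def generate_motion_chunk(tokens: list) -> str:
    # Stand-in for a model call returning a short motion segment.
    return "motion for: " + " ".join(tokens)


def stream_motion(tokens: Iterable, chunk_size: int = 2) -> Iterator[str]:
    buffer = []
    for tok in tokens:
        buffer.append(tok)
        if len(buffer) == chunk_size:
            yield generate_motion_chunk(buffer)  # emit mid-sentence
            buffer = []
    if buffer:
        yield generate_motion_chunk(buffer)


for chunk in stream_motion(["I", "agree", "with", "that", "point"]):
    print(chunk)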

Open Source and Beyond: From Digital Humans to Digital Life

The SentiPulse team invites research organizations and individual developers 
worldwide to push the boundaries of 3D motion generation. Whether you want to 
build your own 3D companion from scratch or extend SUSU with richer expressive 
capabilities for games, film production, robotics, or beyond—the open-source 
framework is ready.

GitHub: https://sentiavatar.github.io/ 
Technical report: https://arxiv.org/abs/2604.02908

About SentiPulse
SentiPulse, founded in September 2025, is an AI company focused on emotional 
foundation models and user experience innovation. The company is dedicated to 
deepening the relationship between humans and AI through advanced 
technology—not simply as tools, but as a bridge to more natural and expressive 
interaction. The team consists of top researchers from leading Chinese 
universities and cross-disciplinary experts with deep expertise in multimodal 
models and 3D digital humans.

 

]]></description>
		<detail><![CDATA[<p><i>Release includes motion dataset, foundation model, and streaming architecture designed to align speech, gesture, and expression in live conversation</i></p> 
<p><span class="legendSpanClass">JINAN, China</span>, <span class="legendSpanClass">April 9, 2026</span> /PRNewswire/ -- SentiPulse, a leading AI company focused on emotional foundation models and user experience innovation, in collaboration with a PhD team from the Gaoling School of Artificial Intelligence (GSAI), Renmin University of China (RUC), announced the open-source release of SentiAvatar, a framework for building expressive interactive 3D digital humans.</p> 
<p>SentiAvatar powers SUSU, a real-time 3D avatar capable of conversation, expressive motion, and emotional delivery. The release includes the full SentiAvatar framework, the SUSU character model, and the SuSuInterActs high-quality motion dataset, all now freely available on GitHub.</p> 
<div class="PRN_ImbeddedAssetReference" id="DivAssetPlaceHolder1608"> 
 <p style="TEXT-ALIGN: center; WIDTH: 100%"><a href="https://mma.prnasia.com/media2/2953007/image1.html" target="_blank" rel="nofollow" style="color: #0000FF"><img src="https://mma.prnasia.com/media2/2953007/image1.jpg?p=medium600" title="High-quality 3D human motions and expressions generated by SentiAvatar are presented by the digital human SUSU." alt="High-quality 3D human motions and expressions generated by SentiAvatar are presented by the digital human SUSU." /></a><br /><span>High-quality 3D human motions and expressions generated by SentiAvatar are presented by the digital human SUSU.</span></p> 
</div> 
<p>In the rapidly advancing field of 3D digital humans, one long-overlooked yet critical issue is becoming increasingly clear: unnatural expression. The avatar's mouth moves and its hands gesture, but the actions don't match the meaning, and the face looks stiff. The combination quickly triggers the uncanny valley effect.</p> 
<p>The root cause is simple: human communication has never relied on spoken language alone. A shrug conveys helplessness, a nod signals agreement, and a slight raise of the eyebrow hints at doubt. These nonverbal signals—gestures, posture, facial expressions—are the soul of real conversation.</p> 
<p>Yet getting a 3D digital human to naturally &quot;gesture and move as it speaks&quot; in real conversation has proven far harder than expected. This is not purely an engineering challenge; it involves three persistent, unsolved problems: the scarcity of high-quality data, the difficulty of understanding composite semantic actions, and the challenge of syncing motion with the rhythm of speech.</p> 
<p><b>SentiAvatar: A Framework for 3D Digital Human Motion Generation</b></p> 
<p><b>The SuSuInterActs Dataset</b></p> 
<p>SentiPulse built SuSuInterActs around a single character: SUSU (age 22, warm and lively, emotionally rich). The dataset contains 21,000 clips and 37 hours of multimodal conversational data, including synchronized speech, annotated behavioral text, full-body motion, and facial expressions—helping address the lack of high-quality Chinese-language datasets in the field.</p> 
<p><b>Motion Foundation Model: Pre-trained on 200K+ Sequences</b></p> 
<p>Because conversational motion data is inherently limited to dialogue scenarios, the team pre-trained a proprietary Motion Foundation Model on more than 200,000 heterogeneous motion sequences (approximately 676 hours), learning general motion patterns that go far beyond dialogue-specific actions.</p> 
<p><b>Core Architecture: Plan-Then-Infill</b></p> 
<p>SentiAvatar introduces a novel dual-channel parallel architecture, Plan-Then-Infill. It separates body motion from facial expression, first planning what action to perform and then filling in how to execute it frame by frame.</p> 
<p><b>State-of-the-Art Real-Time Performance</b></p> 
<p>SentiAvatar achieves new SOTA results on both the SuSuInterActs and BEATv2 datasets. Compared with mainstream models: MoMask lacks speech input, so its motion rhythm feels static and disconnected; EMAGE syncs with audio but ignores semantic intent; AT2M-GPT can misinterpret the meaning of actions; and HunYuan-Motion can produce unstable outputs, often with distorted or unnatural movements. SentiAvatar delivers semantically accurate motion that is tightly aligned with audio.</p> 
<p>The framework generates six-second motion sequences within 0.3 seconds and supports infinite-turn streaming interaction. This means digital humans can continuously generate coherent gestures and expressions during live conversation—without waiting for a full sentence to finish before processing, directly addressing one of the core causes of unnatural expression.</p> 
<p><b>Open Source and Beyond: From Digital Humans to Digital Life</b></p> 
<p>The SentiPulse team invites research organizations and individual developers worldwide to push the boundaries of 3D motion generation. Whether you want to build your own 3D companion from scratch or extend SUSU with richer expressive capabilities for games, film production, robotics, or beyond—the open-source framework is ready.</p> 
<p>GitHub: <a href="https://sentiavatar.github.io/" target="_blank" rel="nofollow" style="color: #0000FF">https://sentiavatar.github.io/</a> <br />Technical report: <a href="https://arxiv.org/abs/2604.02908" target="_blank" rel="nofollow" style="color: #0000FF">https://arxiv.org/abs/2604.02908</a></p> 
<p><b>About SentiPulse<br /></b>SentiPulse, founded in September 2025, is an AI company focused on emotional foundation models and user experience innovation. The company is dedicated to deepening the relationship between humans and AI through advanced technology—not simply as tools, but as a bridge to more natural and expressive interaction. The team consists of top researchers from leading Chinese universities and cross-disciplinary experts with deep expertise in multimodal models and 3D digital humans.</p> 
]]></detail>
		<source><![CDATA[SentiPulse]]></source>
	</item>
	
</channel>
</rss>