Overview

Full Product Details

Author: Xian-Ling Mao, Zhaochun Ren, Muyun Yang
Publisher: Springer Verlag, Singapore
Imprint: Springer Verlag, Singapore
ISBN: 9789819533428
ISBN 10: 9819533422
Pages: 609
Publication Date: 23 November 2025
Audience: Professional and scholarly, College/higher education, Professional & Vocational, Postgraduate, Research & Scholarly
Format: Paperback
Publisher's Status: Active
Availability: Not yet available. This item is yet to be released. You can pre-order this item and we will dispatch it to you upon its release.

Table of Contents

- Information Extraction and Knowledge Graph
- Progressive Training of Transformer for Knowledge Graph Completion Tasks
- Document-level Event Coreference Resolution on Trigger Augmentation and Contrastive Learning
- Dynamic Chain-of-thought for Low-Resource Event Extraction
- On Sentence-level Non-adversarial Robustness of Chinese Named Entity Recognition with Large Language Models
- Spatial Relation Classification on Supervised In-Context Learning
- HGNN2KAN: Distilling hypergraph neural networks into KAN for efficient inference
- Adapting Task-General ORE Systems for Extracting Open Relations between Fictional Characters in Chinese Novels
- DRLF: Denoiser-Reinforcement Learning Framework for Entity Completion
- Fashion-related Attribute Value Extraction with Visual Prompting
- Discovering Latent Relationship for Temporal Knowledge Graph Reasoning
- Logical Rule-Constrained Large Language Models for Document-Level Relation Extraction
- An Adaptive Semantic-Aware Fusion Method for Multimodal Entity Linking
- Retrieve, Interaction, Fusion: a Simple Approach in Ancient Chinese Named Entity Recognition
- Reasoning-Guided Prompt Learning with Historical Knowledge Injection for Ancient Chinese Relation Extraction
- MMD-TKGR: Multi-Agent Multi-Round Debate for Temporal Knowledge Graph Reasoning
- AutoPRE: Discovering Concept Prerequisites with LLM Agents
- Weakly-Supervised Generative Framework for Product Attribute Identification in Live-Streaming E-Commerce
- Exploring Representation-Efficient Transfer Learning Approaches for Speech Recognition and Translation Using Pre-trained Speech Models
- A Neighborhood Aggregation-based Knowledge Graph Reasoning Approach in Operations and Maintenance
- CARE: Contextual Augmentation with Retrieval Enhancement for Relation Extraction in Large Language Models
- RHDG: Retrieval-augmented Heuristics-driven Demonstration Generation for Document-Level Event Argument Extraction
- Large Language Models and Agents
- Beyond One-Size-Fits-All: Adaptive Fine-Tuning for LLMs Based on Data Inherent Heterogeneity
- From Chain to Loop: Improving Reasoning Capability in Small Language Models via Loop-of-Thought
- TaxBen: Benchmarking the Chinese Tax Knowledge of Large Language Models
- Propagandistic Meme Detection via Large Language Model Distillation
- Multi-Candidate Speculative Decoding
- Debate-Driven Legal Reasoning: Disambiguating Confusing Charges through Multi-Agent Debate
- A Human-Centered AI Agent Framework with Large Language Models for Academic Research Tasks
- ReGA: Reasoning and Grounding Decoupled GUI Navigation Agents
- PSYCHE: Practical Synthetic Math Data Evolution
- MultiJustice: A Chinese Dataset for Multi-Party, Multi-Charge Legal Prediction
- Reward-Guided Many-Shot Jailbreaking
- Self-Prompt Tuning: Enable Autonomous Role-Playing in LLMs
- RASR: A Multi-Perspective RAG-based Strategy for Semantic Textual Similarity
- H2HTALK: Evaluating Large Language Models as Emotional Companion
- EvoP: Robust LLM Inference via Evolutionary Pruning
- Large Language Model based Multi-Agent Learning for Mixed Cooperative-Competitive Environments
- EduMate: LLM-Powered Detection of Student Learning Emotions and Efficacy in Semi-Structured Counseling
- MAD-HD: Multi-Agent Debate-Driven Ungrounded Hallucination Detection
- TIANWEN: A Comprehensive Benchmark for Evaluating LLMs in Chinese Classical Poetry Understanding and Reasoning
- RKE-Coder: A LLMs-based Code Generation Framework with Algorithmic and Code Knowledge Integration
- See Better, Say Better: Vision-Augmented Decoding for Mitigating Hallucinations in Large Vision-Language Models
- Exploring Large Language Models for Grammar Error Explanation and Correction in Indonesian as a Low-Resource Language
- Libra: Large Chinese-based Safeguard for AI Content
- FADERec: Fine-grained Attribute Distillation Enhanced by Collaborative Fusion for LLM-based Recommendation
- Improving RL Exploration for LLM Reasoning through Retrospective Replay

Author Information

Author Website: (not provided)

Countries Available: All regions