## SKILL.md Frontmatter

| Field | Value |
|---|---|
| name | journal-matchmaker |
| description | Recommend suitable high-impact factor or domain-specific journals for manuscript submission based on abstract content. Trigger when user provides paper abstract and asks for journal recommendations, impact factor matching, or scope alignment suggestions. |
| version | 1.0.0 |
| category | Research |
| tags | |
| author | AIPOCH |
| license | MIT |
| status | Draft |
| risk_level | Medium |
| skill_type | Tool/Script |
| owner | AIPOCH |
| reviewer | |
| last_updated | 2026-02-06 |
# Journal Matchmaker
Analyzes academic paper abstracts to recommend optimal journals for submission, considering impact factors, scope alignment, and domain expertise.
## Use Cases
- Find the best-fit journal for a new manuscript
- Identify high-impact factor journals in specific research areas
- Compare journal scopes against paper content
- Discover domain-specific publication venues
## Usage

```bash
python scripts/main.py --abstract "Your paper abstract text here" [--field "field_name"] [--min-if 5.0] [--count 5]
```
## Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `--abstract` | str | Yes | - | Paper abstract text to analyze |
| `--field` | str | No | Auto-detect | Research field (e.g., "computer_science", "biology") |
| `--min-if` | float | No | 0.0 | Minimum impact factor threshold |
| `--max-if` | float | No | None | Maximum impact factor threshold |
| `--count` | int | No | 5 | Number of recommendations to return |
| `--format` | str | No | table | Output format: table, json, or markdown |
## Examples

```bash
# Basic usage
python scripts/main.py --abstract "This paper presents a novel deep learning approach..."

# Specify field and minimum impact factor, reading the abstract from a file
python scripts/main.py --abstract "$(cat abstract.txt)" --field "ai" --min-if 10.0 --count 10

# Output as JSON for integration
python scripts/main.py --abstract "..." --format json
```
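With `--format json`, the output might look like the sketch below. The field names and shape are an assumption for illustration, not documented behavior of the script, and the journal shown is fictitious:

```json
[
  {
    "journal": "Example Journal of Machine Intelligence",
    "impact_factor": 12.3,
    "field": "ai",
    "similarity_score": 0.87
  }
]
```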
## How It Works

1. **Abstract Analysis**: Extracts key terms, methodology, and research focus
2. **Field Classification**: Identifies the primary research domain
3. **Journal Matching**: Compares content against journal scopes and aims
4. **Impact Factor Filtering**: Applies IF constraints if specified
5. **Ranking**: Scores and ranks journals by relevance and impact (see the sketch after this list)
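A minimal sketch of steps 3-5, assuming scikit-learn for TF-IDF and cosine similarity (per Technical Details below). The actual `scripts/main.py` may be implemented differently, and `rank_journals` is an illustrative name, not a confirmed interface:

```python
# Sketch of scope matching and ranking; assumes scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_journals(abstract: str, journals: list[dict],
                  min_if: float = 0.0, count: int = 5) -> list[dict]:
    # Step 4: apply the impact factor threshold first
    candidates = [j for j in journals if j.get("impact_factor", 0.0) >= min_if]
    if not candidates:
        return []
    # Step 3: vectorize the abstract alongside each journal's scope text
    corpus = [abstract] + [j["scope"] for j in candidates]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    # Cosine similarity between the abstract (row 0) and every scope
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    # Step 5: rank by relevance and keep the top `count`
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [{"journal": j["name"], "score": round(float(s), 3)}
            for j, s in ranked[:count]]
```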
## Technical Details

- **Difficulty**: Medium
- **Approach**: Keyword extraction + journal database matching
- **Data Source**: Journal metadata from `references/journals.json` (an example entry follows this list)
- **Algorithm**: TF-IDF + cosine similarity for scope matching
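The schema of `references/journals.json` is not documented here; a plausible entry, consistent with the fields the matcher would need, might look like the following (the journal is fictitious):

```json
{
  "name": "Example Journal of Machine Intelligence",
  "impact_factor": 12.3,
  "field": "computer_science",
  "scope": "Original research on machine learning, neural networks, and applied AI systems."
}
```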
## References

- `references/journals.json` - Journal database with impact factors and scopes
- `references/fields.json` - Research field classifications
- `references/scoring_weights.json` - Algorithm tuning parameters
## Notes
- Journal database should be updated periodically (quarterly recommended)
- Impact factor data sourced from Journal Citation Reports (JCR)
- Scope descriptions parsed from official journal websites
- For emerging fields, manual curation may be needed
## Risk Assessment

| Risk Indicator | Assessment | Level |
|---|---|---|
| Code Execution | Python scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Reads input files, writes output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |
## Security Checklist

- No hardcoded credentials or API keys
- Input file paths validated (no `../` traversal; a validation sketch follows this list)
- Output does not expose sensitive information
- Prompt injection protections in place
- Output directory restricted to workspace
- Script execution in sandboxed environment
- Error messages sanitized (no stack traces exposed)
- Dependencies audited
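A minimal sketch of the path validation item above, assuming inputs are confined to a single workspace directory; `safe_path` is a hypothetical helper for illustration, not part of the skill:

```python
# Hypothetical helper for the "no ../ traversal" checklist item.
# Resolves the requested path and rejects anything outside the workspace.
from pathlib import Path

def safe_path(user_path: str, workspace: str = "./workspace") -> Path:
    root = Path(workspace).resolve()
    candidate = (root / user_path).resolve()
    if not candidate.is_relative_to(root):  # requires Python 3.9+
        raise ValueError(f"Path escapes workspace: {user_path}")
    return candidate
```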
## Prerequisites

```bash
# Python dependencies
pip install -r requirements.txt
```
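The contents of `requirements.txt` are not shown here; given the TF-IDF approach noted in Technical Details, it presumably includes at least the following (an assumption, not confirmed by the repository):

```text
# Assumed minimal dependencies for TF-IDF matching (unverified)
scikit-learn
numpy
```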
## Evaluation Criteria

### Success Metrics

- Successfully executes the main recommendation workflow end to end
- Returns the requested number of ranked journal recommendations
- Recommendations respect the `--min-if`, `--max-if`, and `--field` filters
- Handles edge cases (e.g., empty or very short abstracts) gracefully
- Processing time is acceptable for typical abstract lengths

### Test Cases

- **Basic Functionality**: Standard input → Expected output
- **Edge Case**: Invalid input → Graceful error handling (see the sketch after this list)
- **Performance**: Large journal database → Acceptable processing time
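A sketch of the edge-case test, runnable with pytest. It assumes the hypothetical `rank_journals` helper from the How It Works sketch is saved in a module named `matcher`; none of these names are confirmed interfaces of `scripts/main.py`:

```python
# Edge-case tests; rank_journals and the matcher module are hypothetical,
# based on the sketch earlier in this document.
from matcher import rank_journals

def test_empty_abstract_degrades_gracefully():
    journals = [{"name": "Example Journal", "impact_factor": 3.2,
                 "scope": "Machine learning research."}]
    # An empty abstract should not crash; it should return a (possibly
    # low-scoring) list rather than raise.
    result = rank_journals("", journals)
    assert isinstance(result, list)

def test_min_if_can_filter_out_all_journals():
    journals = [{"name": "Example Journal", "impact_factor": 3.2,
                 "scope": "Machine learning research."}]
    assert rank_journals("deep learning", journals, min_if=50.0) == []
```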
## Lifecycle Status

- **Current Stage**: Draft
- **Next Review Date**: 2026-03-06
- **Known Issues**: None
- **Planned Improvements**:
  - Performance optimization
  - Additional feature support