# FileCatalyst Workload Automation (May 2026)
## Method A: Hotfolder – Best for Simple Directory Watching

A hotfolder watches a local directory and automatically transfers every file that lands in it. A typical configuration:

```properties
hotfolder.watch.dir=/opt/fc/watch
hotfolder.target.server=192.168.1.100
hotfolder.target.port=11001
hotfolder.target.user=hotfold_user
hotfolder.target.password=encrypted_pass
hotfolder.target.directory=/uploads
# Delete the local file after a successful transfer
hotfolder.post.delete=true
# On-the-fly compression
hotfolder.compress=true
```

Example use case: a monitoring system drops log files every hour, and FileCatalyst transfers them to a central archive.
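To make that use case concrete, here is a minimal sketch of the producer side: a script the monitoring host could run hourly (for example, from cron) to move finished log files into the watch directory. The source directory and `.log` pattern are assumptions for illustration; the hotfolder handles the transfer itself.

```python
import glob
import shutil

WATCH_DIR = "/opt/fc/watch"        # matches hotfolder.watch.dir above
LOG_SPOOL = "/var/log/monitoring"  # hypothetical source directory

def drop_logs():
    # Move each completed log file into the watch directory;
    # FileCatalyst picks it up and transfers it to /uploads.
    # Assumes logs are rotated before this runs, so files are no longer open.
    for path in sorted(glob.glob(f"{LOG_SPOOL}/*.log")):
        shutil.move(path, WATCH_DIR)

if __name__ == "__main__":
    drop_logs()
```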
## Method B: CLI – Best for Scripted, Scheduled Transfers

For scripted transfers, wrap the FileCatalyst CLI in a small Python helper and drive it from cron or a systemd timer. The script below pushes each outgoing file, records the result in a database, and backs off on failure:

```python
import glob
import hashlib
import logging
import subprocess
import time

def run_fta(local, remote, server, user, pw):
    # Push a local file to the FileCatalyst server via the CLI.
    cmd = ["fta-cli", "--server", server, "--username", user,
           "--password", pw, "--put", local, "--target", remote]
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == 0

def main():
    logging.basicConfig(level=logging.INFO)
    # The outbox path and hash step are assumptions added to make the
    # original fragment runnable; adapt them to your staging layout.
    for f in sorted(glob.glob("/opt/fc/outbox/*")):
        with open(f, "rb") as fh:
            original_hash = hashlib.sha256(fh.read()).hexdigest()
        success = run_fta(f, "/incoming/", "fc-server.company.com", "auto", "secret")
        if success:
            logging.info(f"Success: {f}")
            # Post-processing: record the transfer in a database
            subprocess.run(["psql", "-c",
                            f"INSERT INTO transfers VALUES('{f}', '{original_hash}')"])
        else:
            logging.error(f"Failed: {f}")
            time.sleep(30)  # Back off before the next attempt

if __name__ == "__main__":
    main()
```
## Method C: REST API – Best for Centralized Workload Orchestration

The FileCatalyst Server exposes a REST API (port 8080 by default) for managing transfers, users, and monitoring.
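As a sketch of what API-driven orchestration can look like, the snippet below submits a transfer job and polls its status with `requests`. The endpoint paths, JSON fields, and bearer-token auth are assumptions for illustration, not FileCatalyst's documented API; consult the server's REST documentation for the real routes.

```python
import time

import requests

BASE = "http://fc-server.company.com:8080"     # REST API port from above
HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme assumed

def start_transfer(local, remote):
    # Hypothetical endpoint: submit a transfer job, return its id.
    resp = requests.post(f"{BASE}/api/transfers", headers=HEADERS,
                         json={"source": local, "target": remote})
    resp.raise_for_status()
    return resp.json()["id"]

def wait_for(job_id, poll_seconds=10):
    # Poll the job until it leaves the RUNNING state.
    while True:
        resp = requests.get(f"{BASE}/api/transfers/{job_id}", headers=HEADERS)
        resp.raise_for_status()
        status = resp.json()["status"]
        if status != "RUNNING":
            return status
        time.sleep(poll_seconds)
```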
## Summary Table: Choosing an Automation Method

| Requirement | Recommended Method |
|-------------|--------------------|
| Simple directory watching | Hotfolder |
| Scripted, scheduled transfers | CLI + cron/systemd timer |
| Complex workflow with multiple steps | CLI + Bash/Python logic |
| Integration with Airflow/Jenkins | REST API or BashOperator |
| Central management of many transfers | REST API + custom dashboard |

## Monitoring

To keep an eye on transfer health, use a script that scrapes the REST API and exposes the results as metrics:
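Here is a minimal sketch of such an exporter, assuming a hypothetical `/api/stats` endpoint and field names, and emitting Prometheus-style plain-text metrics on port 9105; adapt the route and fields to the server's actual REST documentation.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

FC_STATS = "http://fc-server.company.com:8080/api/stats"  # hypothetical endpoint

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Scrape the server and re-expose its counters as metrics.
        stats = requests.get(FC_STATS, timeout=5).json()
        body = (
            f"filecatalyst_transfers_active {stats.get('active', 0)}\n"
            f"filecatalyst_transfers_failed {stats.get('failed', 0)}\n"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 9105), MetricsHandler).serve_forever()
```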