Getting Started with LogFlux: A Complete Guide

Welcome to LogFlux! This comprehensive guide will walk you through setting up LogFlux in your application, from initial setup to advanced features. Whether you’re managing logs for a small project or a large-scale distributed system, this guide has you covered.

Quick Start

Step 1: Create Your Account

First, sign up for a free LogFlux account at dashboard.logflux.io/signup. No credit card required.

Step 2: Create an Application

Once logged in, create your first application:

  1. Navigate to the Dashboard
  2. Click “Create Application”
  3. Name your application (e.g., “production-api”)
  4. Select your preferred region (EU or US)
  5. Click “Create”

Step 3: Get Your API Key

After creating your application, you’ll receive an API key. Keep this secure – it’s your application’s authentication credential.

export LOGFLUX_API_KEY="lfx_prod_a1b2c3d4e5f6g7h8i9j0"

Installation

Node.js/JavaScript

npm install @logflux/sdk

const { Logflux } = require('@logflux/sdk');

const logger = new Logflux({
  apiKey: process.env.LOGFLUX_API_KEY,
  application: 'my-app',
  environment: 'production'
});

// Start logging
logger.info('Application started successfully');

Python

pip install logflux

from logflux import Logflux
import os

logger = Logflux(
    api_key=os.environ['LOGFLUX_API_KEY'],
    application='my-app',
    environment='production'
)

# Start logging
logger.info('Application started successfully')

Go

go get github.com/logflux-io/logflux-go

package main

import (
    "os"
    "github.com/logflux-io/logflux-go"
)

func main() {
    logger := logflux.New(logflux.Config{
        APIKey:      os.Getenv("LOGFLUX_API_KEY"),
        Application: "my-app",
        Environment: "production",
    })
    
    // Start logging
    logger.Info("Application started successfully")
}

Basic Logging

Log Levels

LogFlux supports standard log levels:

logger.debug('Detailed debug information');
logger.info('Informational messages');
logger.warn('Warning messages');
logger.error('Error messages');
logger.fatal('Fatal errors that cause application exit');

Structured Logging

Add context to your logs with structured data:

logger.info('User login', {
  userId: '12345',
  email: 'user@example.com',
  loginMethod: 'oauth',
  ipAddress: request.ip
});

logger.error('Payment processing failed', {
  orderId: 'ORD-789',
  amount: 99.99,
  currency: 'USD',
  error: error.message,
  stack: error.stack
});

Contextual Logging

Create logger instances with persistent context:

// Create a logger with request context
const requestLogger = logger.child({
  requestId: req.id,
  userId: req.user.id,
  sessionId: req.session.id
});

// All logs from this logger will include the context
requestLogger.info('Processing request');
requestLogger.debug('Fetching user data');
requestLogger.info('Request completed', { duration: 145 });

Advanced Features

Batch Logging

Optimize performance with batch logging:

const logger = new Logflux({
  apiKey: process.env.LOGFLUX_API_KEY,
  batchSize: 100,        // Send logs in batches of 100
  flushInterval: 5000    // Flush every 5 seconds
});

// Logs are automatically batched and sent efficiently
for (let i = 0; i < 1000; i++) {
  logger.info(`Processing item ${i}`);
}
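
When batching, logs still sitting in the buffer when the process exits can be lost. Below is a minimal shutdown sketch; it assumes the SDK exposes a promise-returning flush() method (a hypothetical name here; check the SDK reference for the exact call):

// Flush buffered logs before the process exits
process.on('SIGTERM', async () => {
  logger.info('Shutting down, flushing buffered logs');
  await logger.flush();   // hypothetical method name; confirm in the SDK docs
  process.exit(0);
});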

Error Tracking

Automatically capture and format errors:

try {
  // Your application code
  processPayment(order);
} catch (error) {
  logger.error('Payment processing failed', {
    error: {
      message: error.message,
      stack: error.stack,
      code: error.code
    },
    order: order.toJSON()
  });
}
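
Beyond try/catch blocks, it can help to log otherwise-unhandled errors before the process dies. A short Node.js sketch using only the logger methods shown above:

// Log unhandled errors before exiting
process.on('uncaughtException', (error) => {
  logger.fatal('Uncaught exception', {
    error: { message: error.message, stack: error.stack }
  });
  process.exit(1);
});

process.on('unhandledRejection', (reason) => {
  logger.error('Unhandled promise rejection', { reason: String(reason) });
});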

Performance Metrics

Track performance metrics alongside your logs:

const timer = logger.startTimer();

// Perform operation
const result = await performDatabaseQuery();

timer.done({
  message: 'Database query completed',
  query: 'SELECT * FROM users',
  rows: result.length
});
// Automatically logs with duration
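
If the operation can throw, wrapping it in try/finally ensures the timing is still recorded. This sketch reuses only startTimer() and done() as shown above:

const timer = logger.startTimer();
let rows = 0;

try {
  const result = await performDatabaseQuery();
  rows = result.length;
} finally {
  // Logged whether the query succeeds or throws
  timer.done({ message: 'Database query finished', rows });
}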

Custom Fields

Add custom fields that appear in all logs:

const logger = new Logflux({
  apiKey: process.env.LOGFLUX_API_KEY,
  defaultFields: {
    service: 'api-gateway',
    version: process.env.APP_VERSION,
    region: process.env.AWS_REGION
  }
});

Using the Inspector CLI

The LogFlux Inspector provides a powerful terminal interface for viewing logs:

Installation

# macOS
brew install logflux/tap/inspector

# Linux
curl -L https://logflux.io/install | sh

# Windows
scoop install logflux-inspector

Basic Usage

# Stream logs in real-time
logflux logs --app my-app --follow

# Search logs
logflux logs --app my-app --search "error"

# Filter by time range
logflux logs --app my-app --since 1h --until now

# Filter by log level
logflux logs --app my-app --level error,warn

# Export logs
logflux export --app my-app --format json > logs.json

Advanced Queries

# Complex search with multiple conditions
logflux query \
  --app my-app \
  --filter 'level:error AND service:payment' \
  --since 24h \
  --limit 100

# Aggregate logs
logflux stats \
  --app my-app \
  --group-by level \
  --interval 1h

Best Practices

1. Use Structured Logging

Instead of:

logger.info(`User ${userId} logged in from ${ipAddress}`);

Do this:

logger.info('User login', {
  userId: userId,
  ipAddress: ipAddress,
  timestamp: new Date().toISOString()
});

2. Include Request Context

Always include request identifiers for tracing:

app.use((req, res, next) => {
  req.logger = logger.child({
    requestId: req.id,
    method: req.method,
    path: req.path
  });
  next();
});
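
With the middleware in place, route handlers can use the per-request logger directly. A small sketch with a hypothetical /users/:id route and data-access helper:

app.get('/users/:id', async (req, res) => {
  req.logger.info('Fetching user', { userId: req.params.id });

  const user = await getUserById(req.params.id);   // hypothetical helper
  if (!user) {
    req.logger.warn('User not found', { userId: req.params.id });
    return res.status(404).json({ error: 'Not found' });
  }

  res.json(user);
});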

3. Log at the Right Level

  • DEBUG: Detailed information for debugging
  • INFO: General informational messages
  • WARN: Warning messages for potential issues
  • ERROR: Error messages for failures
  • FATAL: Critical errors causing application termination
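
For example, a transient failure that will be retried is usually a warning, while exhausting all retries is an error:

// A retryable failure is a warning, not an error
logger.warn('Payment gateway timeout, retrying', { attempt: 2, maxAttempts: 3 });

// Escalate to error only once the operation has definitively failed
logger.error('Payment failed after all retries', { orderId: 'ORD-789', attempts: 3 });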

4. Sanitize Sensitive Data

const sanitizedUser = {
  id: user.id,
  email: user.email.replace(/(.{2}).*(@.*)/, '$1***$2'),
  // Never log passwords, tokens, or credit card numbers
};

logger.info('User registered', { user: sanitizedUser });

5. Use Correlation IDs

Track requests across microservices:

const correlationId = req.headers['x-correlation-id'] || uuid();

const serviceLogger = logger.child({
  correlationId: correlationId,
  service: 'user-service'
});

// Pass correlation ID to downstream services
axios.get('https://api.internal/data', {
  headers: {
    'x-correlation-id': correlationId
  }
});
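
On the receiving service, read the same header back so both services share one correlation ID. A sketch assuming an Express-style app, as in the examples above:

// user-service: reuse the incoming correlation ID instead of generating a new one
app.use((req, res, next) => {
  req.logger = logger.child({
    correlationId: req.headers['x-correlation-id'] || uuid(),
    service: 'user-service'
  });
  next();
});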

Monitoring and Alerting

Setting Up Alerts

Configure alerts in the LogFlux dashboard:

  1. Navigate to “Alerts” section
  2. Click “Create Alert”
  3. Define your conditions:
     level:error AND service:payment
     threshold: 5 errors in 5 minutes
  4. Configure notification channels (Email, Slack, PagerDuty)

Dashboard Creation

Create custom dashboards for your logs:

  1. Go to “Dashboards”
  2. Click “Create Dashboard”
  3. Add widgets:
    • Log stream
    • Error rate graph
    • Response time histogram
    • Top errors table

Troubleshooting

Common Issues

Logs not appearing:

  • Check your API key is correct
  • Verify network connectivity
  • Ensure logs are being flushed (in batch mode)

Rate limiting:

  • Implement exponential backoff when retrying (see the sketch below)
  • Use batch logging
  • Contact support for limit increases
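
A minimal retry-with-backoff sketch, independent of the SDK; sendLogs here is a hypothetical stand-in for whatever call is being rate limited:

// Retry a rate-limited call with exponential backoff and a retry cap
async function withBackoff(fn, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries) throw error;
      const delayMs = Math.min(1000 * 2 ** attempt, 30000);  // 1s, 2s, 4s, ... capped at 30s
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: await withBackoff(() => sendLogs(batch));   // sendLogs is hypothetical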

Missing fields:

  • Check field names (case-sensitive)
  • Verify data types
  • Ensure fields are not null/undefined

Next Steps

Now that you have LogFlux set up, explore these advanced features:

  1. Encryption: Enable field-level encryption for sensitive data
  2. Retention Policies: Configure automatic log archival
  3. Integrations: Connect with your existing tools (Grafana, DataDog, etc.)
  4. API Access: Build custom integrations with our REST API
  5. Compliance: Enable GDPR/HIPAA compliance features

Support

Need help? We’re here for you.

Welcome to LogFlux – happy logging!