Overview

This guide will walk you through processing your first Auvo task report in just a few minutes. You’ll learn how to upload a CSV file, view filtered results, and export reports.
Make sure you’ve completed the installation steps before proceeding with this guide.

Starting the Application

Step 1: Activate Virtual Environment

First, ensure your virtual environment is activated (the command below is for Windows; on macOS/Linux, run source venv/bin/activate instead):
.\venv\Scripts\activate
You should see (venv) in your terminal prompt.
Step 2: Start the Flask Server

Run the application:
python app.py
Wait for the server to start. You should see:
* Running on http://127.0.0.1:5000
Step 3: Open in Browser

Navigate to the application in your web browser:
http://127.0.0.1:5000

Processing Your First File

Understanding Auvo CSV Format

The application expects CSV files exported from Auvo with the following characteristics:
  • Header rows: The first 5 rows are automatically skipped (Auvo exports include metadata)
  • Required column: Must contain a “Relato” (Report) column with task descriptions
  • Supported formats: .csv, .xls, or .xlsx
Ensure your CSV file is exported directly from Auvo without manual modifications. The application skips the first 5 rows to account for Auvo’s export format.
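As a quick sanity check, you can reproduce this parsing with pandas directly. This is a sketch: the metadata rows and values below are invented for illustration, and only the skiprows=5 behavior mirrors the application:

```python
import io

import pandas as pd

# Hypothetical Auvo export: 5 metadata lines, then the real header row.
csv_text = "\n".join([
    "Auvo Export",
    "Empresa: ABC",
    "Periodo: 03/2026",
    "",
    "",
    "Data,Cliente,Endereco,OS Digital,Relato",
    '15/03/2026,Empresa ABC,"Rua Principal, 123",Link,Equipamento quebrado',
])

# skiprows=5 drops the metadata, so line 6 becomes the header.
df = pd.read_csv(io.StringIO(csv_text), skiprows=5)
print(list(df.columns))  # ['Data', 'Cliente', 'Endereco', 'OS Digital', 'Relato']
```

If the "Relato" column is missing from the parsed result, the export was likely modified or has a different number of metadata rows.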

Upload and Process

Step 1: Select Your File

On the home page, click the “Selecionar arquivo” (Select file) button and choose your Auvo export file.

Supported formats:
  • .csv - Comma-separated values
  • .xls - Legacy Excel format
  • .xlsx - Modern Excel format
Step 2: Process the Report

Click “Processar Relatório” (Process Report) to start the analysis.

The application will:
  1. Read the file and skip the first 5 rows
  2. Extract the “Relato” column
  3. Filter tasks containing any of the configured keywords
  4. Generate statistics about matches
  5. Save results to a temporary file for export
Processing typically takes 1-3 seconds for files with up to 10,000 rows.
Step 3: Review Results

The results page displays:
  • Statistics Dashboard: Total records, tasks found, and occurrence rate
  • Keyword Breakdown: How many times each keyword appeared
  • Filtered Table: Tasks containing the keywords with these columns:
    • Data (Date)
    • Cliente (Client)
    • Endereco (Address)
    • OS Digital (Digital Work Order - clickable link)
    • Relato (Report description)
Use the search box above the table to further filter results dynamically.

Understanding the Results

Statistics Panel

The statistics panel shows three key metrics, plus a per-keyword breakdown:
{
  'total': 1523,           # Total records in the uploaded file
  'filtrados': 47,         # Tasks matching keywords
  'percentual': 3.1,       # Percentage of tasks found
  'por_palavra': {         # Breakdown by keyword
    'quebrado': 12,
    'trocar': 18,
    'instalar': 17
  }
}
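As a sketch of how such metrics could be derived from the parsed DataFrame (toy data here; the actual computation lives in app.py):

```python
import pandas as pd

palavras = ['quebrado', 'trocar', 'instalar']
df = pd.DataFrame({'Relato': [
    'Equipamento quebrado',
    'Necessário trocar peça',
    'Visita de rotina, sem pendências',
]})

# Rows whose 'Relato' contains any keyword (case-insensitive)
filtrados = df[df['Relato'].str.contains('|'.join(palavras), case=False, na=False)]
stats = {
    'total': len(df),
    'filtrados': len(filtrados),
    'percentual': round(len(filtrados) / len(df) * 100, 1),
    'por_palavra': {
        p: int(df['Relato'].str.contains(p, case=False, na=False).sum())
        for p in palavras
    },
}
print(stats)
```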

Example Filtered Task

Here’s what a typical filtered task looks like:
| Data | Cliente | Endereco | OS Digital | Relato |
| --- | --- | --- | --- | --- |
| 15/03/2026 | Empresa ABC | Rua Principal, 123 | Link | Equipamento quebrado, necessário trocar peça |
The “OS Digital” column contains clickable links that open the work order directly in Auvo.

Exporting Results

The application offers two export formats:

Excel (.xlsx) Format

Click “Baixar Excel” to download a multi-sheet workbook:

Sheet 1: Tarefas Encontradas (Tasks Found)
  • All filtered tasks with 5 columns
  • Formatted for easy reading in Excel
  • Can be further filtered and analyzed
Sheet 2: Estatísticas (Statistics)
  • Total Records
  • Tasks Found
  • Occurrence Rate (%)
  • Generation Date/Time
The export code:
app.py
with pd.ExcelWriter(output, engine='openpyxl') as writer:
    df.to_excel(writer, index=False, sheet_name='Tarefas Encontradas')
    stats_df = pd.DataFrame([
        ['Total de Registros', stats.get('total', 'N/A')],
        ['Tarefas Encontradas', stats.get('filtrados', 'N/A')],
        ['Taxa de Ocorrência (%)', stats.get('percentual', 'N/A')],
        ['Data de Geração', datetime.now().strftime('%d/%m/%Y %H:%M')]
    ], columns=['Métrica', 'Valor'])
    stats_df.to_excel(writer, index=False, sheet_name='Estatísticas')

Customizing Keywords

The default keywords work for common maintenance scenarios, but you can customize them:
Step 1: Navigate to Configuration

Click “Configurações” in the navigation menu.
Step 2: Edit Keywords

You’ll see the current keywords in a text field:
solicitar peça, quebrado, quebrada, quebrados, orçamento, danificada, danificado, trocar cabo, soldar, trocar, instalar
Modify this list to match your needs. For example:
urgente, crítico, parado, sem funcionamento, manutenção preventiva
Step 3: Save Changes

Click “Salvar” (Save) to update the keywords.
Keywords are stored in the session and will reset when you close the browser. The application uses session-based storage:
app.py
session['custom_keywords'] = [k.strip() for k in keywords if k.strip()]
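A minimal sketch of the save step under these session semantics; the route URL and the keywords form field name are assumptions, not the app's actual names:

```python
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = 'dev'  # sessions require a secret key; use a real one in production

@app.route('/configuracoes', methods=['POST'])  # hypothetical URL and field name
def salvar_configuracoes():
    keywords = request.form.get('keywords', '').split(',')
    # Same cleanup as the app: strip whitespace, drop empty entries
    session['custom_keywords'] = [k.strip() for k in keywords if k.strip()]
    return redirect('/')
```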
Step 4: Re-process File

Return to the home page and process your file again with the new keywords.

Viewing Processing History

The application maintains a history of your recent processing activities:
  1. Click “Histórico” in the navigation menu
  2. View the last 10 processed files with:
    • File name
    • Processing date and time
    • Number of tasks found
    • Total records processed
app.py
def salvar_historico(filename, stats):
    hist = session.get('historico', [])
    hist.insert(0, {
        'arquivo': filename,
        'data': datetime.now().strftime('%d/%m/%Y %H:%M'),
        'encontrados': stats['filtrados'],
        'total': stats['total']
    })
    session['historico'] = hist[:10]  # Keep last 10 entries
History is session-based and will be cleared when you close your browser.

How the Filtering Works

Understanding the filtering mechanism helps you optimize your keywords:
app.py
def processar_arquivo(file, palavras_chave):
    filename = file.filename.lower()
    # Read file (skipping first 5 rows)
    if filename.endswith('.csv'):
        df = pd.read_csv(file, skiprows=5)
    elif filename.endswith(('.xls', '.xlsx')):
        # Let pandas pick the engine (openpyxl reads only .xlsx; .xls needs xlrd)
        df = pd.read_excel(file, skiprows=5)
    
    # Create regex pattern from keywords
    regex_busca = '|'.join(palavras_chave)  # Creates: 'keyword1|keyword2|keyword3'
    
    # Filter rows where 'Relato' column contains any keyword (case-insensitive)
    coluna_descricao = 'Relato'
    necessidades = df[df[coluna_descricao].astype(str).str.contains(
        regex_busca, 
        case=False,  # Ignore case
        na=False     # Treat NaN as False
    )].copy()
    
    # Return only specific columns
    colunas_resultado = ['Data', 'Cliente', 'Endereco', 'OS Digital', 'Relato']
    return df, necessidades[colunas_resultado]
Key Points:
  • Filtering is case-insensitive (“Quebrado” matches “quebrado”)
  • Uses regex OR logic (matches if ANY keyword is found)
  • NaN values are treated as non-matches
  • Supports partial matches (“trocar” matches “trocar peça”)
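These rules are easy to verify on toy data:

```python
import pandas as pd

# One match by case, one partial match, one missing value
df = pd.DataFrame({'Relato': ['Equipamento Quebrado', 'Trocar peça amanhã', None]})
palavras = ['quebrado', 'trocar']

mask = df['Relato'].astype(str).str.contains('|'.join(palavras), case=False, na=False)
print(mask.tolist())  # [True, True, False]
```

Note that the keywords are joined into a regex, so a keyword containing regex metacharacters (e.g. "peça (nova)") would be interpreted as a pattern rather than literal text.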

Common Use Cases

Use keywords focused on equipment status:
quebrado, danificado, não funciona, parado, com defeito
Filter for part-related tasks:
solicitar peça, trocar peça, substituir, peça danificada, precisa de peça
Identify high-priority items:
urgente, emergência, crítico, prioridade, parado
Track new installations:
instalar, instalação, novo equipamento, implantar, configurar

Next Steps

Configure Keywords

Customize filtering keywords to match your specific needs

Export Reports

Generate Excel or PDF reports for your team

View History

Track your processing activities over time

Advanced Filtering

Combine multiple keywords for precise filtering

Tips for Best Results

Optimize Keywords: Start with broad terms and refine based on results. Common maintenance terms in Portuguese like “quebrado”, “danificado”, and “trocar” work well.
Regular Exports: The temp files are stored with unique IDs. Export your results immediately after processing to avoid session expiration.
Use the Search Box: The results table includes a dynamic search feature. Use it to further narrow down results without reprocessing the file.
Check Statistics: The percentage metric helps you understand if your keywords are too broad (high %) or too narrow (low %). Aim for 3-10% for most use cases.
