Thursday, 31 July 2025

How to Split an MP4 File into 2-Minute Segments Using Python

If you’ve ever worked with large MP4 video files, you may have run into upload size limits or slow processing. Splitting a video into smaller, evenly timed chunks, such as 2-minute segments, makes editing and sharing much easier. In this tutorial, we’ll use Python and the MoviePy library to automate the process.

What You'll Need

  • Python 3 installed
  • MoviePy library: pip install moviepy
  • An MP4 file you'd like to split

How It Works

The script loads your MP4 file, calculates its duration, and slices it into 2-minute segments using MoviePy’s subclip method (the final segment may be shorter if the duration isn’t an exact multiple of two minutes). Each chunk is saved as a new video file.

Python Script to Split the MP4


from moviepy.editor import VideoFileClip
import math
import os

def split_video(file_path, chunk_duration=120):
    video = VideoFileClip(file_path)
    video_duration = video.duration  # total length in seconds (may be fractional)
    total_chunks = math.ceil(video_duration / chunk_duration)
    
    base_name = os.path.splitext(os.path.basename(file_path))[0]
    output_dir = f"{base_name}_chunks"
    os.makedirs(output_dir, exist_ok=True)

    print(f"Total Duration: {video_duration} seconds")
    print(f"Splitting into {total_chunks} segments of {chunk_duration} seconds each...")

    for i in range(total_chunks):
        start = i * chunk_duration
        end = min(start + chunk_duration, video_duration)
        subclip = video.subclip(start, end)
        output_path = os.path.join(output_dir, f"{base_name}_part{i+1}.mp4")
        subclip.write_videofile(output_path, codec="libx264", audio_codec="aac")
        print(f"Saved: {output_path}")

    print("Splitting completed.")

# Example usage
split_video("your_video.mp4", chunk_duration=120)
  

Output

After running the script, you’ll get a folder named after your video (e.g., my_video_chunks) containing files like:

  • my_video_part1.mp4
  • my_video_part2.mp4
  • ...

Tips

  • For longer or shorter segments, just change the chunk_duration parameter (see the one-line example after this list).
  • Ensure your MP4 file is not corrupted and is properly encoded with audio.
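
For example, to cut 5-minute pieces instead of 2-minute ones:

split_video("your_video.mp4", chunk_duration=300)  # 300 seconds = 5 minutes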

Monday, 21 July 2025

Why You Should Use Camera Covers for Your iPhone and Computer

Why Camera Covers Matter

In the digital age, privacy is more vulnerable than ever. Hackers and malicious software can gain unauthorized access to your webcam or smartphone camera without your knowledge. A camera cover is a small but powerful physical barrier that blocks the lens when not in use, giving you peace of mind.

Types of Camera Covers

  • Slide Covers: These attach to your device and slide open or closed, allowing you to easily block or unblock the camera.
  • Snap-On Covers: Designed for smartphones, these usually clip over the lens and can be removed when needed.
  • Adhesive Covers: Simple stickers or tabs that can be stuck over the camera and peeled off without residue.
  • Magnetic Covers: More common for laptops, they attach magnetically and provide a clean, reusable option.

Camera Covers for iPhones

iPhones, especially models with advanced Face ID and multiple rear cameras, often raise privacy concerns. While software controls exist, a physical cover for the front-facing lens offers extra protection. Some iPhone covers include a built-in sliding cover for the front camera, while others require a small stick-on piece for the rear camera lens.

Camera Covers for Computers

Laptops and desktops, particularly those with built-in webcams, are prime targets for spying software. Most camera covers for computers are ultra-thin so they don't interfere with closing the lid. They’re easy to install, and many are reusable and adjustable. While some business laptops (for example, certain HP and Lenovo models) include built-in privacy shutters, most machines, including Apple’s MacBooks, do not, which makes a third-party cover a worthwhile addition.

How to Choose the Right Camera Cover

  • Make sure it’s compatible with your device model.
  • Look for ultra-slim designs to avoid interference with screen closing.
  • Choose covers that are easy to apply and remove without damaging your device.
  • Opt for non-intrusive, minimalist designs for better aesthetics.

Conclusion

Camera covers are an inexpensive, effective way to enhance your digital privacy. Whether you’re working from home, attending virtual meetings, or simply using your device daily, covering your camera helps keep prying eyes away. Invest in a camera cover today—your future self might thank you.

Sunday, 20 July 2025

How to Resume Interrupted Downloads with curl and Python

File downloads can get interrupted due to network issues, system crashes, or accidental terminations. Instead of restarting from scratch, you can resume the download from where it left off. This blog post shows you how to do that using two powerful tools: curl and Python.

1. Resuming Downloads with curl

curl makes it simple to resume an interrupted download using the -C - option.

curl -C - -O https://example.com/largefile.zip

Explanation:

  • -C -: Continue/Resume a previous file transfer at the given offset. The dash (-) tells curl to automatically find the correct byte offset.
  • -O: Saves the file with its original name.

2. Resuming Downloads with Python

In Python, you can use the requests module to achieve similar functionality by setting the Range HTTP header.

Step-by-step Python Script:

import os
import requests

url = 'https://example.com/largefile.zip'
filename = url.split('/')[-1]

# Get existing file size if partially downloaded
resume_header = {}
if os.path.exists(filename):
    existing_size = os.path.getsize(filename)
    resume_header = {'Range': f'bytes={existing_size}-'}
else:
    existing_size = 0

with requests.get(url, headers=resume_header, stream=True) as r:
    r.raise_for_status()
    # A 206 (Partial Content) response means the server honored the Range header;
    # a plain 200 means it is sending the whole file, so start over instead of appending.
    mode = 'ab' if existing_size and r.status_code == 206 else 'wb'
    with open(filename, mode) as f:
        for chunk in r.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)

print(f"Download of '{filename}' complete.")

How It Works:

  • Checks if the file already exists and determines its size.
  • Uses a Range header to request only the remaining bytes.
  • Appends the remaining bytes to the partially downloaded file (provided the server confirms partial content with a 206 response).

3. Tips for Reliable Downloads

  • Always verify that the server supports HTTP range requests (look for Accept-Ranges: bytes in the response headers); see the sketch after this list.
  • Use try/except blocks for robust error handling in production scripts.
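
A quick way to run that check from Python (a small sketch, reusing the placeholder URL from above):

import requests

url = 'https://example.com/largefile.zip'
resp = requests.head(url, allow_redirects=True)
if resp.headers.get('Accept-Ranges') == 'bytes':
    print("Server supports range requests; resuming is possible.")
else:
    print("No Accept-Ranges header; the server may not support resuming.")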

Conclusion

Whether you're scripting downloads for automation or recovering from a failed transfer, both curl and Python provide efficient methods to resume interrupted downloads. Choose the tool that best fits your workflow.

Saturday, 19 July 2025

Download Large Files in Chunks Automatically Using curl and Python

Downloading large files from the internet can be time-consuming and error-prone. One efficient technique is to download the file in smaller parts (chunks) and merge them after completion. In this guide, we’ll show you how to automate and accelerate chunk downloads using curl with parallel threads in Python.

Why Parallel Chunk Downloads?

  • Faster downloads using multiple threads
  • More stable over poor connections
  • Improved control over large files

Requirements

  • Python 3.x
  • curl installed on your system
  • A server that supports HTTP Range requests

Python Script for Parallel Download

Save the following code as parallel_chunk_download.py:

import os
import math
import threading
import subprocess
import requests

def get_file_size(url):
    response = requests.head(url, allow_redirects=True)
    if 'Content-Length' in response.headers:
        return int(response.headers['Content-Length'])
    else:
        raise Exception("Cannot determine file size. Server does not return 'Content-Length'.")

def download_chunk(url, start, end, part_num):
    filename = f"part{part_num:03d}.chunk"
    cmd = ["curl", "-s", "-r", f"{start}-{end}", "-o", filename, url]
    subprocess.run(cmd, check=True)

def merge_chunks(total_parts, output_file):
    with open(output_file, "wb") as out:
        for i in range(total_parts):
            part = f"part{i:03d}.chunk"
            with open(part, "rb") as pf:
                out.write(pf.read())
            os.remove(part)

def main():
    url = input("Enter file URL: ").strip()
    output_file = input("Enter output filename: ").strip()
    chunk_size = 100 * 1024 * 1024  # 100 MB

    total_size = get_file_size(url)
    total_parts = math.ceil(total_size / chunk_size)

    print(f"Total size: {total_size} bytes")
    print(f"Starting parallel download in {total_parts} chunks...")

    threads = []
    for i in range(total_parts):
        start = i * chunk_size
        end = min(start + chunk_size - 1, total_size - 1)
        t = threading.Thread(target=download_chunk, args=(url, start, end, i))
        t.start()
        threads.append(t)

    for t in threads:
        t.join()

    print("Merging chunks...")
    merge_chunks(total_parts, output_file)
    print(f"Download complete: {output_file}")

if __name__ == "__main__":
    main()

How It Works

  1. The script uses requests to find the total file size
  2. Divides the file into 100MB chunks
  3. Spawns a thread for each chunk, each using curl with a specific byte range
  4. Merges all parts after download (a quick size check on the result is sketched below)
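
As a sanity check after merging (a small sketch, assuming the output_file and total_size variables from main() above), compare the merged file's size with the size the server reported:

import os

merged_size = os.path.getsize(output_file)
if merged_size != total_size:
    raise RuntimeError(f"Size mismatch: got {merged_size} bytes, expected {total_size}")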

Tips

  • Adjust chunk_size to balance the number of parts against per-request overhead
  • Because each chunk is fetched by a separate curl process, Python threads are sufficient here; for very large files, cap the number of concurrent downloads instead of starting one thread per chunk (see the sketch after this list)
  • For unstable connections, re-attempt failed chunks before merging
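
A minimal sketch of capped concurrency, assuming the url, chunk_size, total_parts, total_size, and download_chunk names from the script above:

from concurrent.futures import ThreadPoolExecutor

# Download at most four chunks at a time instead of one thread per chunk.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = []
    for i in range(total_parts):
        start = i * chunk_size
        end = min(start + chunk_size - 1, total_size - 1)
        futures.append(pool.submit(download_chunk, url, start, end, i))
    for f in futures:
        f.result()  # re-raises any curl error from a failed chunk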

Conclusion

Using Python and curl together allows you to automate and optimize file downloads, especially when working with large files. Parallel chunk downloading is an efficient and scriptable way to speed up your workflow.

Five Ways to Check Internet Speed from the Terminal

Whether you're a system administrator or a curious user, knowing how to test your internet speed from the command line is a powerful skill. Here are five reliable ways to do just that using the terminal.

1. speedtest-cli

speedtest-cli is a Python-based command-line tool that uses Speedtest.net to test your internet speed.

sudo apt install speedtest-cli  # Debian/Ubuntu
speedtest-cli

It will display your ping, download, and upload speeds in a clear and readable format.
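
Because speedtest-cli is itself a Python package, you can also call it as a library. Here is a minimal sketch (assuming the speedtest module that pip install speedtest-cli provides):

import speedtest  # provided by the speedtest-cli package

st = speedtest.Speedtest()
st.get_best_server()              # pick the lowest-latency test server
download_bps = st.download()      # results are in bits per second
upload_bps = st.upload()
print(f"Ping: {st.results.ping:.1f} ms")
print(f"Download: {download_bps / 1e6:.2f} Mbit/s")
print(f"Upload: {upload_bps / 1e6:.2f} Mbit/s")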

2. fast-cli

fast-cli is a simple tool from Fast.com to measure your download speed.

npm install --global fast-cli
fast

This is ideal if you want a quick, minimal check of your download speed, though it does require Node.js and npm to install.

3. Using wget

wget is traditionally used for downloading files, but you can use it to estimate download speed by fetching a large file.

wget --output-document=/dev/null http://speedtest.tele2.net/100MB.zip

The output shows the download speed near the end of the process. Cancel after a few seconds if you only want an estimate.

4. Using curl

curl can be used similarly to wget for a quick bandwidth test:

curl -o /dev/null http://speedtest.tele2.net/100MB.zip

Watch curl's progress meter: the average download speed is shown in the Dload column in real time.

5. nload

nload is a real-time bandwidth monitor that visually displays incoming and outgoing traffic.

sudo apt install nload  # Debian/Ubuntu
nload

This doesn’t perform a speed test per se, but it's excellent for monitoring bandwidth while downloading or streaming.

Conclusion

There are multiple ways to check internet speed directly from the terminal depending on your needs. From real-time download tests to graphical bandwidth monitors, the command line gives you great flexibility for network diagnostics.

How to Test Website Availability with ping, curl, and wget

Introduction

Monitoring website availability is a crucial part of system administration, web development, and IT troubleshooting. While there are many sophisticated tools for uptime monitoring, sometimes a quick check using built-in command-line tools is all you need. In this article, we’ll show you how to use ping, curl, and wget to test if a website is up and responsive.

1. Using ping

The ping command checks if a host is reachable by sending ICMP echo requests and measuring the response time.

ping example.com

If the site is reachable, you’ll see replies with response times. Note: Some web servers or firewalls block ICMP traffic, so a failed ping doesn't always mean the site is down.

2. Using curl

curl fetches the content of a URL and is ideal for testing HTTP response codes.

curl -I https://example.com

The -I flag tells curl to fetch only the headers. A successful website usually returns HTTP/1.1 200 OK.
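
If you want to script this check, you can wrap the same curl call from Python, for example in a small monitoring loop (a sketch with placeholder URLs; adjust the site list to your own):

import subprocess

sites = ["https://example.com", "https://example.org"]

for site in sites:
    # -s silences the progress meter, -I sends a HEAD request,
    # -o /dev/null discards the headers, -w "%{http_code}" prints only the status code.
    result = subprocess.run(
        ["curl", "-sI", "-o", "/dev/null", "-w", "%{http_code}", site],
        capture_output=True, text=True
    )
    code = result.stdout.strip()
    status = "UP" if code.startswith(("2", "3")) else "DOWN"
    print(f"{site}: HTTP {code} -> {status}")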

3. Using wget

Like curl, wget can retrieve content from web servers. It's often used for downloading files but also works well for testing availability.

wget --spider https://example.com

The --spider option checks the site’s availability without downloading the content. If the site is reachable, you'll see a “200 OK” or similar status.

Conclusion

With ping, curl, and wget, you have a powerful trio of tools for testing website availability right from your terminal. Whether you're debugging a server issue or writing a simple monitoring script, these commands are quick, effective, and always available.

Five Powerful Uses of the wget Command

Unlock the full potential of your terminal with these practical wget examples.

1. Download a Single File

The most basic use of wget is downloading a file from a given URL:

wget https://example.com/file.zip

This saves the file in your current directory with its original name.

2. Download an Entire Website

You can mirror an entire website for offline viewing:

wget --mirror --convert-links --page-requisites --no-parent https://example.com

This command recursively downloads pages, images, stylesheets, and converts links for local browsing.

3. Resume Interrupted Downloads

If a download was interrupted, you can resume it using the -c flag:

wget -c https://example.com/largefile.iso

This is particularly helpful for large files or slow connections.

4. Download Files from a List

Put URLs in a text file and download them all at once:

wget -i urls.txt

Each line in urls.txt should be a complete URL. Great for batch downloading.

5. Set Download Speed Limits

To avoid hogging bandwidth, limit the download speed:

wget --limit-rate=200k https://example.com/bigfile.zip

This restricts the download speed to 200 KB/s.