<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Posts on marley.io</title>
    <link>https://marley.io/posts/</link>
    <description>Recent content in Posts on marley.io</description>
    <generator>Hugo -- 0.145.0</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 24 Apr 2025 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://marley.io/posts/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Rust-Powered Full-Text Search w/Tantivy</title>
      <link>https://marley.io/posts/rust-fts/</link>
      <pubDate>Thu, 24 Apr 2025 00:00:00 +0000</pubDate>
      <guid>https://marley.io/posts/rust-fts/</guid>
      <description>I built a full-text search engine in Rust using Tantivy to ingest and index Brazil&amp;rsquo;s Diário Oficial da União &amp;ndash; the federal government&amp;rsquo;s daily journal. It scans new publications for key phrases and sends real-time email alerts.</description>
    </item>
    <item>
      <title>Using Goroutines and Channels for High-Throughput S3 Object Hashing</title>
      <link>https://marley.io/posts/go-s3-hasher/</link>
      <pubDate>Tue, 13 Aug 2024 00:00:00 +0000</pubDate>
      <guid>https://marley.io/posts/go-s3-hasher/</guid>
      <description>Revisiting the S3 Hasher, but this time in Go! We achieve performance similar to Rust, processing at around 3 GB/s using Go&amp;rsquo;s built-in concurrency primitives.</description>
    </item>
    <item>
      <title>High-Throughput S3 Object Hashing: 3GB/s with Rust and Tokio</title>
      <link>https://marley.io/posts/rust-concurrent-s3-hashing/</link>
      <pubDate>Tue, 30 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://marley.io/posts/rust-concurrent-s3-hashing/</guid>
      <description>A high-speed S3 object hasher built with Rust, processing at up to 3 GB/s. We use a channel-based architecture to concurrently stream and hash S3 objects, processing a 12 TB dataset in just over an hour and demonstrating Rust&amp;rsquo;s capability for efficient, large-scale data processing.</description>
    </item>
  </channel>
</rss>
