I've coded a small raytracer that renders a scene (based on Peter Shirley's tutorial, just written in Swift). The raytracer itself works fine and outputs a correct PPM file. However, I was hoping to wrap it in a UI that updates the picture as each pixel value is computed during the render. To that end I made a macOS app with a basic model-view architecture.
Here is my model:
//
// RGBViewModel.swift
// rtweekend_gui
//
//
import SwiftUI
// RGB structure to hold color values
struct RGB {
    var r: UInt8
    var g: UInt8
    var b: UInt8
}
// ViewModel to handle the RGB array and updates
class RGBViewModel: ObservableObject {
    // Define the dimensions of your 2D array
    let width = 1200
    let height = 675

    // Published property to trigger UI updates
    @Published var rgbArray: [[RGB]]

    init() {
        // Initialize with black pixels
        rgbArray = Array(repeating: Array(repeating: RGB(r: 0, g: 0, b: 0), count: width), count: height)
    }

    func render_scene() {
        for j in 0..<height {
            for i in 0..<width {
                // Generate a random color
                let r = UInt8.random(in: 0...255)
                let g = UInt8.random(in: 0...255)
                let b = UInt8.random(in: 0...255)
                // Update on the main thread since this affects the UI
                DispatchQueue.main.async {
                    // Update the array
                    self.rgbArray[j][i] = RGB(r: r, g: g, b: b)
                }
            }
        }
    }
}
And here is my view:
//
// RGBArrayView.swift
// rtweekend_gui
//
//
import SwiftUI
struct RGBArrayView: View {
    // The 2D array of RGB values
    @StateObject private var viewModel = RGBViewModel()
    // Control the size of each pixel
    private let pixelSize: CGFloat = 1

    var body: some View {
        VStack {
            // Display the RGB array
            Canvas { context, size in
                for y in 0..<viewModel.rgbArray.count {
                    for x in 0..<viewModel.rgbArray[y].count {
                        let rgb = viewModel.rgbArray[y][x]
                        let rect = CGRect(
                            x: CGFloat(x) * pixelSize,
                            y: CGFloat(y) * pixelSize,
                            width: pixelSize,
                            height: pixelSize
                        )
                        context.fill(
                            Path(rect),
                            with: .color(Color(
                                red: Double(rgb.r) / 255.0,
                                green: Double(rgb.g) / 255.0,
                                blue: Double(rgb.b) / 255.0
                            ))
                        )
                    }
                }
            }
            .border(Color.gray)

            // Button to start filling the array
            Button("Render") {
                viewModel.render_scene()
            }
            .padding()
        }
        .padding()
        .frame(width: CGFloat(viewModel.width) * pixelSize + 40,
               height: CGFloat(viewModel.height) * pixelSize + 80)
    }
}
// Preview for SwiftUI
struct RGBArrayView_Previews: PreviewProvider {
    static var previews: some View {
        RGBArrayView()
    }
}
The render does work and the image displays. However, I thought I had set this up so the image would update pixel by pixel, and that doesn't happen; the image appears all at once. What am I doing wrong?
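For what it's worth, one variant I've been thinking about (just a sketch, I haven't confirmed it behaves any differently) moves the loops onto a background queue and publishes one row at a time rather than one pixel at a time, the idea being that the main run loop then gets a chance to redraw between updates:

func render_scene() {
    // Run the pixel loop off the main thread so the main run loop
    // stays free to process UI updates while rendering is in progress.
    DispatchQueue.global(qos: .userInitiated).async {
        for j in 0..<self.height {
            // Build one row at a time to avoid queueing a main-thread
            // block per pixel.
            var row = [RGB]()
            row.reserveCapacity(self.width)
            for _ in 0..<self.width {
                row.append(RGB(r: UInt8.random(in: 0...255),
                               g: UInt8.random(in: 0...255),
                               b: UInt8.random(in: 0...255)))
            }
            DispatchQueue.main.async {
                // Assigning the row mutates the @Published array,
                // which should trigger a Canvas redraw.
                self.rgbArray[j] = row
            }
        }
    }
}

Is something like that actually needed here, or should the per-pixel version above already be updating incrementally?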
I'm writing a simple app for iOS and I'd like to be able to do some text-to-speech in it. I have a basic audio manager class with a "speak" function:
import Foundation
import AVFoundation
class AudioManager {
    static let shared = AudioManager()

    var audioPlayer: AVAudioPlayer?

    var isPlaying: Bool {
        return audioPlayer?.isPlaying ?? false
    }

    var playbackPosition: TimeInterval = 0

    func playSound(named name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else {
            print("Sound file not found")
            return
        }
        do {
            if audioPlayer == nil || !isPlaying {
                audioPlayer = try AVAudioPlayer(contentsOf: url)
                audioPlayer?.currentTime = playbackPosition
                audioPlayer?.prepareToPlay()
                audioPlayer?.play()
            } else {
                print("Sound is already playing")
            }
        } catch {
            print("Error playing sound: \(error.localizedDescription)")
        }
    }

    func stopSound() {
        if let player = audioPlayer {
            playbackPosition = player.currentTime
            player.stop()
        }
    }

    func speak(text: String) {
        let synthesizer = AVSpeechSynthesizer()
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}
And my app shows text in a ScrollView:
ScrollView {
    Text(self.description)
        .padding()
        .foregroundColor(.black)
        .font(.headline)
        .background(Color.gray.opacity(0))
}
.onAppear {
    AudioManager.shared.speak(text: self.description)
}
However, the text doesn't get read out (in the simulator). I see some output in the console:
Error fetching voices: Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "Invalid container metadata for _UnkeyedDecodingContainer, found keyedGraphEncodingNodeID", underlyingError: nil)). Using fallback voices.
I'm probably doing something wrong here, but I'm not sure what.
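One thing I wasn't sure about: the synthesizer in speak(text:) is a local variable, so maybe it goes away before it has a chance to speak? A trimmed-down variant I've been meaning to try keeps it as a property instead (just a sketch, not yet tested on a device):

import AVFoundation

class AudioManager {
    static let shared = AudioManager()

    // Hold on to the synthesizer so it isn't deallocated as soon as
    // speak(text:) returns.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}

Would that change anything, or does the "Error fetching voices" message point at a simulator issue instead?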
Topic: Media Technologies, SubTopic: Audio